
SignArt

Art That Everyone Can Feel.


Making art accessible,
one sign at a time.


Project Overview

I created a website that merges my passions for art and sign language to make artistic experiences more accessible to the Deaf community.

 

The platform allows users to upload an artwork and view a video of me interpreting its emotional essence in sign language. For instance, if an artwork conveys “joy,” the site pairs it with a video of me signing that emotion.

 

The idea grew out of two things: my three years in my school’s Sign Language Club, where learning from a Deaf teacher showed me both the richness of the language and the reality of communication barriers, and my personal connection to hearing differences through my father, who is deaf in one ear.

My interest deepened during my internship at BB&M Contemporary Gallery, where I realized how often museums, galleries, and digital art spaces lack meaningful resources for Deaf audiences, denying them the immediacy of experiencing art in their own language.

 

While my project is still in its early stages, I am working toward expanding it into fuller sign language interpretations that describe both the artwork and the emotions it evokes. Ultimately, my goal is to bridge the gap between art and accessibility, using technology and creativity to break down barriers and foster a more inclusive experience. 

Turning Colors into Gestures.


Project
Deep Dive:

How I'm Using AI to Break Down Barriers in Art Appreciation

The Start of the Project: A Question Sparked in a Small Gallery

This project began with a simple question sparked in a small gallery. While interning at a mid-sized gallery, I noticed a disappointing reality amidst the brilliant artworks: unlike major museums, most smaller galleries had virtually no resources for the hearing-impaired, such as specialized docents, sign language interpretation, or other barrier-free content. The profound emotions and stories that art conveys should be open to everyone, yet an invisible barrier was limiting this experience for an entire community.

This experience became a crucial source of inspiration. "What if technology could become the bridge to tear down this barrier?" I asked myself. "What if anyone, anywhere, could have an experience equivalent to a professional docent using only their smartphone?" To answer these questions, I decided to launch a project that uses AI to "translate" visual art into a language accessible to the hearing-impaired: sign language and intuitive text.

Core Technology and How It Works: The Process of 'Translating' Art

The core of my system goes beyond simple image recognition; it's about understanding and converting the emotional and narrative context of an artwork. The process is broken down into three main stages.


Stage 1:

Deep Analysis with a Multimodal AI (GPT-4o)

For the analysis engine, I chose OpenAI's latest multimodal AI model, GPT-4o. The reason I selected GPT-4o is its exceptional ability to understand images on a near-human level, not just text. This model can go beyond identifying objects in a painting to holistically analyze the harmony of colors, the stability of the composition, the texture of the brushstrokes, and the unique mood and emotion that these elements create together.

When a user uploads an image, it’s sent to my server and relayed to GPT-4o. At this point, I use a meticulously designed prompt to ensure the AI performs at its best: I instruct the AI to act as an “expert art critic” and to return its findings strictly in a structured JSON format. I designed this JSON object to contain the following specific information (a sketch of such a request follows the list below):


title & artist: Objective information about the artwork's name and creator.

analysis: A concise summary of the artwork's background, style, and meaning in 3-4 sentences.

emotion: The single most dominant emotion that permeates the piece.

question: A thought-provoking question designed to encourage the user to think more deeply about the artwork.
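As a rough sketch of what such a request could look like, assuming a Python backend that uses the OpenAI Python SDK (the exact prompt and server code in the live version differ), the call might be structured like this:

```python
import base64
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def analyze_artwork(image_path: str) -> dict:
    """Send an uploaded artwork to GPT-4o and get back the structured JSON fields."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # force a JSON-only reply
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an expert art critic. Respond only with a JSON object "
                    "containing the keys: title, artist, analysis, emotion, question. "
                    "The emotion must be one of Plutchik's eight primary emotions."
                ),
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Analyze this artwork."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            },
        ],
    )
    return json.loads(response.choices[0].message.content)
```

Constraining the reply to JSON keeps the rest of the pipeline simple: every downstream step can rely on the same five fields being present.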


Stage 2:

Plutchik-Based Emotion-to-Sign Language Mapping

To translate the abstract emotions analyzed by the AI into high-fidelity sign language videos, I adopted psychologist Robert Plutchik's 'Wheel of Emotions' theory as the theoretical backbone of my emotion-to-sign-language mapping system. I constrained the AI's emotion output to one of the eight primary emotions defined by Plutchik: joy, trust, fear, surprise, sadness, disgust, anger, and anticipation.

This academic approach not only ensures the consistency of the AI's analysis but also plays a crucial role in accurately linking each emotion to its corresponding professional sign language video. For example, if the AI identifies the emotion as "Fear," the system instantly calls the video for "Fear" from the pre-built "Digital Sign Language Emotion Library" and displays it to the user.
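A minimal sketch of that lookup, assuming the eight library videos are stored under hypothetical file paths named after each emotion:

```python
# Hypothetical paths into the "Digital Sign Language Emotion Library":
# one pre-recorded video per Plutchik primary emotion.
SIGN_VIDEO_LIBRARY = {
    "joy":          "/videos/joy.mp4",
    "trust":        "/videos/trust.mp4",
    "fear":         "/videos/fear.mp4",
    "surprise":     "/videos/surprise.mp4",
    "sadness":      "/videos/sadness.mp4",
    "disgust":      "/videos/disgust.mp4",
    "anger":        "/videos/anger.mp4",
    "anticipation": "/videos/anticipation.mp4",
}


def lookup_sign_video(emotion: str) -> str | None:
    """Return the pre-recorded video for the detected emotion,
    or None if the model ever answers outside the constrained set."""
    return SIGN_VIDEO_LIBRARY.get(emotion.strip().lower())
```

Because the prompt restricts the model to exactly eight emotions, this table stays small and every analysis maps to a video that already exists.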


Stage 3:

Delivering an Integrated User Experience (UX)

Finally, the user's screen displays a unified result of all this analysis and conversion. The ① original artwork image remains at the center, while the ② sign language video conveying the core emotion autoplays beside or below it. Simultaneously, the ③ detailed text analysis and thought-provoking questions generated by the AI are also presented.
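As an illustrative sketch rather than the production code, a small Flask endpoint could hand the front end all three elements in one payload, reusing the analyze_artwork and lookup_sign_video helpers from the earlier sketches:

```python
import tempfile

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.post("/interpret")
def interpret():
    # Save the uploaded artwork to a temporary file.
    upload = request.files["artwork"]
    with tempfile.NamedTemporaryFile(suffix=".jpg", delete=False) as tmp:
        tmp.write(upload.read())
        image_path = tmp.name

    # Stage 1: GPT-4o analysis (title, artist, analysis, emotion, question).
    analysis = analyze_artwork(image_path)

    # Stage 2: look up the matching sign language video for the primary emotion.
    video_url = lookup_sign_video(analysis["emotion"])

    # Stage 3: return everything the page needs to render the artwork (1),
    # the sign language video (2), and the text analysis and question (3).
    return jsonify({"analysis": analysis, "sign_video": video_url})
```

The front end then only has to place the original image, autoplay the returned video, and print the analysis and question alongside them.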

Impact and Vision: Beyond the Present, Toward Full Sign Language Translation

I believe this project is a significant first step in breaking down the barriers to art appreciation. However, I am also keenly aware of the current system's practical limitations. Translating the entire in-depth analysis for countless works of art into sign language videos is a task that requires immense time and resources. Therefore, in the current version, I chose an approach that delivers the emotional core of the artwork in the most immediate and intuitive way: translating the "primary emotion" into sign language, while supplementing it with text for detailed descriptions and further reflection.

But my ultimate goal is to provide the entire AI-generated text analysis in full, coherent sign language sentences, allowing hearing-impaired users to receive the same depth of information as someone listening to an audio guide. To achieve this, I am actively considering the implementation of future technologies:

1. Real-time Sign Language Generation via 3D Avatars:

This method involves using Natural Language Processing (NLP) to convert the AI's text analysis into a formal sign language structure (notation). This data would then drive a realistic 3D avatar that performs the signs in real-time. This technology could integrally express not only hand gestures but also the crucial facial expressions and body language that are part of sign language, enabling much more natural communication. A toy sketch of the text-to-notation step this would require appears after the second approach below.

2. Generative AI for Sign Language Video Synthesis:

This approach would utilize a cutting-edge Text-to-Sign-Language Video model, trained on a massive dataset of sign language videos. By simply inputting the text analysis, the AI would generate a brand-new, photorealistic video of a person signing the content, just like a real-life interpreter. This offers limitless scalability, as it can translate novel sentences that don't exist in a pre-recorded database.
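To make the first approach concrete, here is a toy illustration of the text-to-notation (gloss) step it would require. This is only a sketch: a real system would use a trained NLP model and a full sign language grammar, not the tiny hypothetical keyword lexicon used here.

```python
# Toy illustration only: mapping words from the AI's analysis text to crude
# gloss tokens that an avatar engine could, in principle, consume. The lexicon
# below is hypothetical and far smaller than any real sign language vocabulary.
GLOSS_LEXICON = {
    "night":    "NIGHT",
    "sky":      "SKY",
    "swirling": "SWIRL",
    "fear":     "FEAR",
    "joy":      "JOY",
    "color":    "COLOR",
}


def text_to_gloss(sentence: str) -> list[str]:
    """Return gloss tokens for the known words in a sentence, in order."""
    tokens = [word.strip(".,!?\"'").lower() for word in sentence.split()]
    return [GLOSS_LEXICON[t] for t in tokens if t in GLOSS_LEXICON]


print(text_to_gloss("The swirling night sky conveys both fear and joy."))
# -> ['SWIRL', 'NIGHT', 'SKY', 'FEAR', 'JOY']
```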

By keeping pace with technological advancements and actively researching these methods, I aim to evolve this project from simple emotion translation to the complete narrative translation of art. The ultimate vision I am working toward is a world where technology makes art a truly universal language for everyone.

How to Use

01

Go to the Upload page.
Look for “Upload Artwork” on the homepage.

02

Prepare your image
Use a clear JPG/PNG.
Crop out frames or backgrounds if possible for best results.

03

Upload the artwork (or take a photo)
Click Upload or snap a picture with your camera. Wait for the artwork preview to appear on the left.

04

View the sign interpretation
A video of me signing the core emotion appears on the right.

05

Control the playback
Use Play, Pause, or Replay as needed.
