From r/MachineLearning

Can frontier AI models actually read a painting? [R]

I wrote up a small experiment on whether frontier multimodal models can appraise art from vision alone.

I tested 4 frontier models on 15 paintings worth about $1.46B in total auction value, in two settings:

  1. image only
  2. image + basic metadata
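The two-condition setup above can be sketched as a small scoring loop. Everything here is a hypothetical stand-in (`appraise`, the field names, the toy data are mine, not from the post); the idea is just to show how the image-only and image-plus-metadata conditions compare on the same paintings.

```python
# Sketch of the two-condition protocol. `appraise` stands in for
# any multimodal-model call that returns a dollar estimate.

def relative_error(estimate, actual):
    """Absolute valuation error as a fraction of the true auction price."""
    return abs(estimate - actual) / actual

def score_model(appraise, paintings):
    """Run both conditions on every painting; return mean relative error per setting."""
    errors = {"image_only": [], "image_plus_metadata": []}
    for p in paintings:
        errors["image_only"].append(
            relative_error(appraise(p["image"]), p["price"]))
        errors["image_plus_metadata"].append(
            relative_error(appraise(p["image"], p["metadata"]), p["price"]))
    return {k: sum(v) / len(v) for k, v in errors.items()}

# Toy stand-in for a model: ignores the image, leans on metadata when given.
def toy_appraise(image, metadata=None):
    return metadata["last_sale"] if metadata else 1_000_000

paintings = [
    {"image": "example.jpg", "price": 2_000_000,
     "metadata": {"last_sale": 1_900_000}},
]
print(score_model(toy_appraise, paintings))
# → {'image_only': 0.5, 'image_plus_metadata': 0.05}
```

A per-setting error gap like this is one crude way to quantify how much a model's valuation depends on text rather than pixels.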

The main finding is what I describe as a recognition-vs-commitment gap.

In several cases, a model could identify the work or artist from pixels alone, but that recognition did not always translate into committing to the corresponding valuation. Metadata helped some models far more than others.

Gemini 3.1 Pro was strongest in both settings. GPT-5.4 improved sharply once metadata was added.

I thought this was interesting because it suggests that for multimodal models, “seeing” something and actually relying on what is seen are not the same thing.

Would be curious what people think about:

  • whether this is a useful framing
  • how to design cleaner tests for visual reliance vs textual reliance
  • whether art appraisal is a reasonable probe for multimodal grounding

Blog post: https://arcaman07.github.io/blog/can-llms-see-art.html

submitted by /u/ShoddyIndependent883

Tagged with

#frontier AI models
#multimodal models
#art appraisal
#recognition vs commitment gap
#image only
#image + basic metadata
#visual reliance
#textual reliance
#metadata
#Gemini 3.1 Pro
#GPT-5.4
#auction value
#painting valuation
#visual grounding
#cleaner tests
#artwork identification