It would be interesting to do a similar experiment with a series of photos. You could maybe interface with a user's photo library and select photos grouped by facial recognition. After all, none of these tracking companies are using just one data point.
Completely wrong on the political side (maybe because I'm not US-based), but otherwise not bad at all:
- astonishing geoguessing
- very good inference of some character traits
- and finally quite good ad targeting
EDIT: I tried a few photos (different people in various settings) and each time I got the same line: "racial bias towards immigrants" - which was flatly wrong every time. Intriguing.
EDIT2: different photos of the same person (me) in different settings give totally opposed characteristics. Very unreliable, but I guess with several photos (a lifetime's worth, in Google's case) it's another story.
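That "several photos" point can be sketched concretely: even when per-photo guesses contradict each other, a simple majority vote over a whole library stabilizes the profile. A minimal illustration with invented per-photo outputs (nothing here reflects any real service's pipeline):

```python
from collections import Counter

# Invented per-photo model outputs for the same person; note the
# contradictions between individual photos.
per_photo_guesses = [
    {"temperament": "extrovert", "politics": "liberal"},
    {"temperament": "introvert", "politics": "liberal"},
    {"temperament": "introvert", "politics": "conservative"},
]

def aggregate(guesses):
    """Majority vote per field across all photos."""
    profile = {}
    for field in guesses[0]:
        votes = Counter(g[field] for g in guesses)
        profile[field] = votes.most_common(1)[0][0]
    return profile

print(aggregate(per_photo_guesses))
# {'temperament': 'introvert', 'politics': 'liberal'}
```

With a whole photo library to vote over, the per-photo noise that commenters here are laughing at largely averages out.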
Cropping doesn't necessarily remove EXIF data. It doesn't even always remove the original pixels (some tools simply set cropTop, cropLeft, and similar fields).
My mind was blown when I saw rainbolt uncrop a picture.
Anyway, as mentioned elsewhere: when I tried it, the vision API was overloaded but I still received the location data. And that was from a picture taken inside my car (no landmarks or horizon visible).
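Whether a crop leaks metadata depends entirely on whether the tool copies it over. A minimal sketch with Pillow (a third-party library; the Make tag here stands in for sensitive fields like GPS coordinates):

```python
from io import BytesIO

from PIL import Image  # third-party: pip install Pillow

# Build a tiny JPEG carrying an EXIF tag (271 = Make).
src = Image.new("RGB", (64, 64), "white")
exif = Image.Exif()
exif[271] = "CameraCorp"
original = BytesIO()
src.save(original, format="JPEG", exif=exif)

img = Image.open(BytesIO(original.getvalue()))
cropped = img.crop((0, 0, 32, 32))

# A cropping tool that carries the metadata along keeps the tag...
leaky = BytesIO()
cropped.save(leaky, format="JPEG", exif=img.getexif())
keeps_tag = Image.open(BytesIO(leaky.getvalue())).getexif().get(271)

# ...while rebuilding the image from raw pixels guarantees it is gone.
stripped = Image.new(cropped.mode, cropped.size)
stripped.putdata(list(cropped.getdata()))
clean = BytesIO()
stripped.save(clean, format="JPEG")
drops_tag = 271 not in Image.open(BytesIO(clean.getvalue())).getexif()
```

So a crop from an editor that preserves metadata still tells a geolocation model exactly where you were; only a re-encode from raw pixels is a safe strip.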
I uploaded a picture, maybe a difficult one. It described somebody both like me and like my opposite. Every single commercial suggestion was for something I have never bought in my life. It felt like those weather forecasts that show cloud, sun, and rain icons for the same location on the same day.
I uploaded a picture of a board game table from when I was playing with friends last week. Other than the obvious (they're friends who like playing board games), it got literally everything wrong. The photo was taken in Asia and it guessed a Western city. The ethnicity, sexuality, median income, and political views it ascribed to us were all hilariously inaccurate.
Uploaded pics of my face - the data has 13 fields, and 12 were incorrect. (It only correctly guessed the emotion on my face and some objects in the picture. Surprisingly, no other photo app has ever missed my age by ±20 years.) Pretty useless demo imo.
I don't know much about the Google Vision API that it claims to use, but uploading the same passport photo of mine twice produced wildly different results in the "data" tab.
There are fields like interests, income, and biases/prejudices which vary the most, so I assume that's just the site pulling things from its own database of racist stereotypes?
I will say I am impressed with Instagram advertising me music software. I really like making music and I’ve bought quite a few things off of Instagram ads.
Is this one of those things like the old Facebook games where people were answering personal questions that can be used to guess all kind of secret answers for taking over accounts? Are people actually uploading real photos of themselves here?
Ente Photos competes with Apple Photos and Google Photos. It is also more open, and you can share an album link with someone so they can add their photos without signing up.
Very generic. Shows the same obvious info for two completely different people. "interests in tech... struggles with procrastination...".
Also funny - red shirt means you are Labor (in Australia).
The execution may have a few blind spots and mistakes, but I get the idea behind the message. I'm sure the big companies have a lot more data and ways to do this better, too.
If you believe this study [1], humans can guess party affiliation at least slightly better than random chance from images alone.
Or [2] is an (unscientific) exploration from the other direction, prompting image generation models to make images of republican and democrat voters, with very different results
Presumably, everything you have done publicly (and hence your personality) exists somewhere in the big Google neural network. It gets compressed somewhere into the many billions of weights. It might be hard to decompress into useful information, but it is there nonetheless. Just showing your face might trigger and activate some layers in there.
That's sometimes possible (e.g. the "Trump woman" look, or certain "I know it when I see it" stylistic cues mainly displayed by progressive women that I can't really articulate). Polarization has turned political alignment into subculture, and members of subcultures often dress certain ways (and not necessarily consciously).
About as accurate as a horoscope. Sherlock Holmes it is not.
It got the location (exif, I guess) and was able to identify that I was a balding mediocre middle-aged guy, but the more specific it got the more wrong (and insulting) it was.
"He appears tired and introspective. He may exhibit biases such as confirmation bias, anchoring bias, in-group bias and out-group bias. His interests could involve reading, hiking, and programming, coupled with less constructive activities like smoking, excessive drinking, and gambling.
This individual seems to possess low self-esteem, exhibits introversion, a lack of emotional stability, and low self-control, making them susceptible to targeted advertising."
Not too great. I submitted a photo taken of me at graduation. It got the location totally wrong and was only around 50-70% accurate on my hobbies and interests. It was able to correctly guess my sexuality and ethnicity, which is rather unsurprising.
Seems like nonsense to me. I'd love to see the prompt. From one of the sample images:
"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?
* It's an ad / undisclosed conflict of interest. (They appear to be a photo hosting site in competition with, e.g., Google Photos.)
* The TOS presumes to claim forced arbitration over you.
(AFAICT, it's just running uploads through an AI. I don't think the actual Google product has these features; we've simply asked an AI to hallucinate the biases of two people sitting under a tree, but now, according to the actual linked site, they're probably lesbians.
I.e., the (undisclosed) prompt that generated this likely came from the site, not from Google. Showing your work goes a long way toward building trust that this isn't simple fear mongering, and while there's a good argument for being careful about what you upload to a corporation on the Internet, "upload to this corporation instead" feels like a "fool me once…" type of solution.)
Me too, but I'd be curious to see what it does with an entire library, as the website suggests. I found the demonstration of the intent to be interesting, regardless.
Like others have pointed out, it reads like a horoscope. The example images give a reasonable approximation of what I'd profile them as too, but after trying a few of my own picture it's clearly BS. Garbage in, garbage out.
This "use LLMs as psychometric/political polling substitutes" idea seems to have jumpstarted a weird cottage industry of "synthetic" surveys. The model is pattern-matching on superficial visual cues and dressing it up as insight (I have a long beard, hence I vote for the green party).
Nate Silver put it well recently: [AI polls are fake polls][1].
An LLM inferring personality from a photo is even further down that chain of abstraction. That's not profiling, it's stereotyping with extra steps.
You guys are completely missing the point. It's not about getting your details right. It's about what will be done with you when the AI's inferences are wrong.
From the generated descriptions of the sample images, it looks like their prompt is asking for a vaguely center-right gloss on demographics? Is that the agenda of the site operators, or an appeal to presumed paranoid libertarian site visitor?
I uploaded a picture with poor lighting, wearing dark clothes. It got almost everything wrong...
....
Reading, coding, martial arts, substance abuse, illegal hacking, violent thoughts
Except for guessing the right continent (not that remarkable), mine is so majestically wrong that I would either dislike or fully hate all of the products I got recommended.
This is obviously dishonest fearmongering, but I kinda support it if it helps non-tech people develop a sense of the type of private information tech companies are trying to collect.
The results were laughable. AI certainly has useful applications, but it also became astrology combined with slot machines for many tech-inclined folks.
Mine was way off! I uploaded a photo of myself reading a book outdoors in Ashkelon (Israel). It got everything other than my religion wrong--and even that was half right. I'm Sefardi. And it should know that Jews don't shave with razors! (See https://en.wikipedia.org/wiki/Shaving_in_Judaism) It recommended products including "Dollar Shave Club" -- to a Jewish man with a very full and long beard.
I think this "technology" is a big nothingburger.
And "Low Self-Esteem" Ha! I love myself.
> The man appears to be of Ashkenazi Jewish descent, possibly with an income range of $50,000 to $80,000 USD. It's plausible he identifies with Judaism, with a heterosexual orientation and potentially leaning towards a liberal political stance. He might harbor social biases related to ageism and classism, as well as racial biases stemming from cultural differences and stereotyping. He wears an expression of thoughtful interest, clad in casual attire. He might have interests in reading, learning, and spending time in nature. Conversely, he may dislike activities like excessive consumerism, engaging in superficial social interactions, or feeling pressured to conform.
> The person seems to have low self-esteem and average emotional stability hence we can target them with self-help and social networking type of products and services, such as guided meditation apps like Headspace, confidence-boosting courses like Skillshare, online therapy like Talkspace, and motivational podcasts like The Tony Robbins Podcast, and also personal grooming products such as Old Spice deodorant, Dollar Shave Club razors, Clinique skincare, and Levi's jeans.
> The man appears to be Asian, with an estimated income range of USD 50,000 - USD 75,000.
> These people seem to have low self-esteem, is slightly introverted, has high emotional stability, is not very adventurous and does have some self-control hence we can target them with wooden puzzles, adventure novels, travel products, personalized houseware, such as Melissa & Doug Wooden Puzzles, Penguin Classics Adventure Novels, Osprey Travel Backpacks, Viski Personalized Whiskey Glasses, credit cards, life insurance, home internet and streaming services, such as Capital One Credit Cards, State Farm Life Insurance, Xfinity Home Internet, Netflix Streaming Services.
Hahaha no, what the fuck. Every part of the response was wrong except the objective race/clothes/setting.
So we're racist just because we're white, but we also supposedly vote for the 'liberal' parties which call us all racist because we happen to be white. We also have low self-esteem, are introverts, and more of such nonsense. All that from a photo showing profiles in a forest with the sun casting rays between the trunks: no faces visible, no EXIF data in the photo. Oh, and we're also supposed to be in Nova Scotia.
The only thing it got correct is the fact that we're white or 'Caucasian', insert the currently mandated term. The rest is total nonsense. They insist they can target us with ads for ecological dog food and other pet paraphernalia. Good luck with that, we tend to block all ads and our photos are not stored anywhere within reach of these data parasites.
Reads like a horoscope, just vague stuff that's always gonna be slightly true... Picture of me in Oslo and it says "he enjoys travel".
And then some really weird stuff: "he may also have a penchant for video games, excessive drinking, and skipping work".
Guessing my exact location - Oslo Opera House - was impressive though.
> His agnostic leanings and heterosexual orientation align with a more liberal political stance.
That's a very safe guess
> He is susceptible to confirmation bias, in-group bias, the availability heuristic and out-group homogeneity... he may also be prone to excessive phone use, binge-watching TV shows, and impulse buying.
That's literally everyone
I think it uses EXIF data for that: when I tried it, the vision API was overloaded so nothing else showed up, but it still displayed my location.
Nope, I uploaded some EXIF-less photos, and in my case its guesses ranged from somewhat accurate to astonishingly accurate.
I uploaded a pic of some friends at the lake and it guessed a very specific lake 1000 miles away from where it was taken. Obviously it was a very generic background, all you see is trees and water so it could be anywhere. I uploaded a scanned photo from when my parents were my age standing in front of a NASA sign at KSC and it got it right but I think you can read some text on the sign. It can also be tricked really easy. I uploaded a selfie of some friends wearing Halloween costumes of Bill Belichick and his girlfriend (wearing UNC merch) taken in a bathroom with the words "GET OUT" written on the mirror. It thinks the photo was taken in North Carolina (it wasn't) and that the couple would be interested in buying graffiti supplies (they aren't).
The assumptions it makes about religion, politics, income, and biases is kinda lame. It just makes an assumption based on the age and isn't correct most of the time.
And you think it's acceptable to upload photographs of your friends to some random service to use as it sees fit?
Glad I'm not your friend, honestly.
That's (scarily) pretty standard for most LLMs by now. Paste the same images into ChatGPT and you will get a very accurate guess
It's also pretty fun to do this with Gemma 4, with its very pretty and structured reasoning output (which SotA model providers hide). For example, for one picture that it misidentified as being taken inside the "Long Room of the Old Library at Trinity College Dublin", I can see that it did consider the correct answer (Duke Humfrey's Library in Oxford) early on as one of three candidates, but was apparently misled by the ceiling height and a window in the background.
For me, it was all vague stuff and yet, none of it was true.
> Guessing my exact location - Oslo Opera House - was impressive though.
Not really. They have almost global picture coverage thanks to Google Street View.
They only need small snippets of a picture to geolocate you.
I don't think they meant guesses were impressive in the sense of succeeding against a constraint of limited supporting data (which would be impressive in its own way). But just against a baseline expectation of what could reasonably be derived from a picture.
That there's such a thing as massive support infrastructure in the form of data and algorithmic firepower, that powers guessing capabilities to be as good as they are, that's the impressive thing.
I see a lot of comments saying all the guesses are totally wrong or horoscope-like.
But, can I offer a quandary? Some companies won't care if it's wrong.
If some executive decides to buy into AI profiling like this, and make customer decisions based on it, then how would the customer ever know:
1. why they are being treated differently
2. how or why to correct it
I don't know if it's scarier being RIGHT or WRONG
Also to generate value they don't need to be 100% right. Judging you on 10 metrics and being wrong about nine of them still allows them to show you highly relevant ads on that one metric they got right. Which is a win for them compared to showing you random ads
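A toy expected-value sketch of that argument (all numbers are invented, purely illustrative): one correct dimension out of ten can already beat random targeting:

```python
# Illustrative clickthrough rates; real figures vary wildly.
ctr_random = 0.005        # untargeted ads
ctr_targeted = 0.02       # ads matching a genuinely correct profile dimension
p_dimension_right = 0.1   # only 1 of 10 profile dimensions is correct

# Expected clickthrough when ads are served off the mostly-wrong profile.
ctr_profile = (p_dimension_right * ctr_targeted
               + (1 - p_dimension_right) * ctr_random)
print(ctr_profile > ctr_random)
```

Under these made-up numbers the mostly-wrong profile still lifts expected clickthrough (0.0065 vs 0.005), which is exactly the "win compared to random ads" being described.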
Felt quite off for me: Wrong salary guess, wrong food preferences, wrong political affiliation, partially correct hobbies, wrong ad targeting ideas. What was surprisingly accurate: location and the fact that I (an academic male in his thirties photographed close to the mountains) might like hiking and coffee.
If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
> If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
I think this sort of guessing is intended to be combined with additional data the marketers already have, like purchase history, location, social media posts, and so on. Basically the VLM output is treated as another data point rather than the sole source, or the existing data could be fed into the model's prompt before reading the image.
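A hypothetical sketch of that combination step (field names and data are invented; no real marketing API is being described): the known signals simply get serialized into the prompt that accompanies the image:

```python
def build_profile_prompt(known_signals: dict) -> str:
    """Fold pre-existing tracking data into a VLM prompt (illustrative only)."""
    facts = "; ".join(f"{k}: {v}" for k, v in known_signals.items())
    return (
        "Known customer signals -> " + facts + ". "
        "Using the attached photo, refine this profile and suggest ad categories."
    )

prompt = build_profile_prompt({
    "recent_searches": "gravel bikes",
    "last_purchase": "bike helmet",
    "coarse_location": "mountain region",
})
print(prompt)
```

Seeded this way, the model is no longer guessing cold from pixels; the photo just refines a profile the marketer already holds.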
That makes sense and is a fair point. And maybe the guesses from the picture would be more revealing if they had my purchase history as context. But I wanted to make the point that the last 10 web pages that I visited would probably allow you to make much more accurate predictions about me that would partly contradict the guesses the model made.
Just knowing you were googling bikes, I think I can guess your political affiliation - unless there is some golden MAGA bike that I haven't heard of yet.
Well, you'd be incorrect in my case. Guess that's the funny thing about stereotypes.
On the 4th example photo the model says:
>They likely share an agnostic worldview and identify as heterosexual.
I wonder how the model would know that they are heterosexuals?
Let's be careful about categorizing people so easily and in such a simplistic way.
Many homosexuals are visually identifiable as such (with reasonable certainty), some by accident and some by deliberate signalling. I can easily see how the absence of any such signals could end up as a classification as heterosexual, even though it really should put them in the "unknown" category.
Of course any automated classification of that kind quickly gets problematic in multiple ways. In the EU it's a fast-track to getting your AI labeled as a "high risk AI system" that has higher requirements for quality control, ensuring fairness and user choice, etc
Tagged both me (male) and my male partner as heterosexuals. I think there is still some learning to do on that front. Rainbow merchandise has not been as widely adopted as you might think.
> I wonder how the model would know that they are heterosexuals?
It’s about 95% likely to be correct, which is very effective at scaring statistically illiterate people.
A bit like "they do not have cancer": if you are fitting to a distribution, you get the best results by estimating the average. Being hetero is the majority/average, so it's a good prediction.
But doing this on a 20-way parlay like in this case will almost always fail.
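The parlay arithmetic is easy to check: even if each majority-class guess is independently right 95% of the time (an illustrative figure), twenty of them rarely all land at once:

```python
p_single = 0.95    # per-field accuracy of a "safe" majority-class guess
n_fields = 20      # number of fields in the generated profile

# Probability that every single field in the profile is correct,
# assuming the fields are guessed independently.
p_all_correct = p_single ** n_fields
print(round(p_all_correct, 3))  # 0.358: the full profile is wrong ~2 times in 3
```

Which matches the thread's experience: individually safe guesses, collectively a profile that misses.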
That's the 5th time this site has been submitted; the most active discussion happened here: https://news.ycombinator.com/item?id=42419469
I'm not sure feeding it personal pictures is a good idea at all
I sent it an AI generated photorealistic picture I happened to have. It gave me a description of the picture (a doctor in an OR) followed by some generic stuff about how rich doctors spend their money.
"The person seems to have low self-esteem, displays introversion, poor honesty, low emotional stability, very little adventurousness and poor self-control hence we can target them with both niche and common products and services."
It's amusing to me how wrong this is... I don't know how you could determine such characteristics from a photo in either direction. I will admit, though, that my appearance tends to throw mixed and incorrect signals (not by accident). I find the entire concept of appearance signaling pretty off-putting, so I guess this is a great result.
The only thing Google Lens has succeeded at for me is age, race, and location. Basically everything else has been very wrong.
The point isn't to get it right. It's about what people with more power and capital than you will do when the AI makes the wrong inferences about you.
this
Feels a bit rage-baity
Clicking on the example photo of a white family with 3 small children in a field.
> Biases: Ageism, classism, racial profiling, microaggressions
???
I tried a photo of myself. Not only did it get virtually everything (country, beliefs, politics, interests) wrong, it also seems like an offensive and inappropriate stereotype-as-a-service:
> Based on his demographic and location, he may adhere to Hinduism, and likely identifies as heterosexual. Considering the socio-political landscape of India, he might lean towards the Bharatiya Janata Party. His biases could include ageism and elitism, along with casteism and colorism. He seems contemplative and calm. He is wearing a grey t-shirt and sunglasses. His interests might span reading, travelling, and exercising, but on the darker side, he may exhibit road rage, neglect family, and overeat.
By mistake I uploaded the same picture twice, and while both results were vague, the descriptions and "data" were wildly different.
E.g. the first time I was an extrovert, the second time an introvert… About the only thing that stayed the same was "heterosexual", but that's a statistically safe guess.
The description it generated is a stereotype of who I am. It did correctly assess that I'm white, though.
I uploaded a picture of me from Halloween wearing a katana. It classified me as asexual, atheist, interested in crime, vandalism, and with a racial bias against immigrants. It also suggests that I should be offered ads for black market weapon dealers (Silk Road) and/or an arsonist starter kit (Amazon, surprisingly).
If you're looking forward to attracting the attention of automated police systems then now you know how.
It also suggested my income is about a third of what it really is, but the lower income is more in line with the stereotype.
I used a photo of me in the car and it said my fashion interests include sweater and seatbelt.
Clearly you want to buy a replica Chewbacca cosplay outfit.
Most comments seem to focus on the accuracy here. That does not seem to be the point of the website though.
They merely seem to want to point out that this is the way Google, Meta and anyone else with access to your photos look at those photos. And will abuse access to them by mining them for data to sell you stuff.
I uploaded a picture of Philip Seymour Hoffman from The Big Lebowski, the scene outside Lebowski's house, when the Dude is talking to Bunny by the pool.
"This image shows an adult man, likely in his 40s or 50s, wearing a suit and tie. The location appears to be outdoors, possibly in front of a building or large house, suggested by the architectural details visible in the background and the surrounding trees. He is wearing glasses, and seems to be smiling widely, creating a sense of approachability.
This man is likeyl Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."
It said PSH had an "ageism" bias, and it said the same about me. It also said he and I have a proclivity for gambling and a poor diet, lol
> This man is likeyl Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."
Totally nails The Dude.
Everyone is saying how awful and terrible this is, but I thought it was quite fascinating.
I showed it some pictures of when I was in a bad headspace and it successfully associated me with introversion, procrastination, isolation, one picture it said an interest of mine was stealing, which was accurate at that time in my youth.
The tech exists and has existed forever, as creepy as it is, I'd rather it be public and accessible than not.
This is just a stereotype machine. It incorrectly identified me as Christian, republican, and earning $50-75K, all of which are far off base, and apparently just because I’m white and wearing khakis.
Hmm, this seems to just identify the features of the person in the picture and then extrapolate generic demographic information from them, mostly from race. Is it doing more?
e.g. https://i.imgur.com/FlnYwrK.png
It's hard to know what to make of this. It feels like it's listing stereotypes and superficial guesses. The tool accurately detected my age, my ethnicity and my location. Then it just kind of "vibed out" a bunch of things. Some of those vibes are strangely accurate on their own, but taken together the set of guesses is laughably inaccurate.
It would be interesting to do something similar with a series of photos. You could maybe interface with a user's photo library and select photos grouped by facial recognition. After all, none of these tracking companies are using just one data point.
Completely wrong on the political side (maybe because I'm not US-based), but otherwise not bad at all:
- astonishing geoguessing
- very good inference of some characters traits
- and finally quite good ad targeting
EDIT: I tried with a few photos (different people in various settings) and each time I got this: "racial bias towards immigrants" - which was wrong every time. Intriguing.
EDIT2: different photos of the same person (me) in different settings gives many totally opposed characteristics. Very unreliable, but I guess with several photos (a lifetime's photos in the case of Google) it's another story.
For me the geoguessing was completely wrong. It said the photo is from Belgium, but it’s not even close. (The photo does show a big chunk of nature.)
It's possible the image you uploaded contains geographic coordinates.
EDIT: this is exactly what happened with my image upload, for example
I thought about that but no, they were resized pictures completely lacking EXIF tags.
Cropping doesn't necessarily remove EXIF. It doesn't even always remove the original pixels: some editors simply set cropTop, cropLeft, and similar metadata fields while leaving the full image data in place.
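The point above is easy to check for yourself: a JPEG stores EXIF (GPS coordinates included) in an APP1 segment near the start of the file, so scanning the segment markers reveals whether any EXIF block survived a resize or crop. A minimal stdlib-only sketch (the function name and the toy byte strings below are illustrative, not from any commenter's tool):

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP1 Exif segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:               # marker stream out of sync
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                      # SOS: compressed image data follows
            break
        # Segment length is big-endian and includes the two length bytes.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                         # APP1 segment with Exif header
        i += 2 + length                         # skip to the next marker
    return False
```

Running `has_exif(open("photo.jpg", "rb").read())` before uploading is a quick way to confirm whether an export actually stripped the metadata; a full GPS extraction would additionally need to parse the TIFF structure inside the segment.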
My mind was blown when I saw Rainbolt uncrop a picture.
Anyways, as mentioned elsewhere: when I tried it, the Vision API was overloaded but I still received the location data. And it was from a picture taken inside my car (no landmarks or horizons visible).
I haven't tried this particular tool, but SOTA models are really good at geoguessing photos that legitimately don't have EXIF.
I've tried with personal photos and was able to get very accurate guesses just with flora and architecture in the background of a photo.
Mine was super wrong. I was outside and it still guessed a wildly wrong location. It was wrong about my age and my interests.
I uploaded a picture, maybe a difficult one. It described somebody both like me and like the opposite of me. Every single commercial suggestion was for something I have never bought in my life. It gave me the feeling of those weather forecasts that show a cloud, sun, and rain icon for the same location on the same day.
This is a promotion for https://ente.com/#pricing
I uploaded a picture of a board game table from when I was playing with friends last week. Other than the obvious (they're friends who like playing board games), it got literally everything wrong. It was taken in Asia and it guessed a Western city. The ethnicity, sexuality, median income, and political views it ascribed to us were all hilariously inaccurate.
What I’ve learned today is that, apparently, I do not look like a person who earns as much as I do, across several different pictures.
Uploaded my face pics: the data has 13 fields, and 12 were incorrect. (It only correctly guessed the emotion on the face and some objects in the picture. It was surprising; no photo app has ever been ±20 years off on my age before.) Pretty useless demo, imo.
I don't know much about the Google Vision API that it claims to use, but uploading the same passport photo of mine twice produced wildly different results in the "data" tab.
There are fields like interests, income, and biases/prejudices, which vary the most, so I assume that's just the site pulling things from its own database of racist stereotypes?
This feels like a data scraping honeypot...
The myriad trackers on every website provide much higher-value signals than this LLM guesswork.
Lazy. It just spits out demographic info.
I will say I am impressed with Instagram advertising me music software. I really like making music and I’ve bought quite a few things off of Instagram ads.
That’s advertising done right!
Is this one of those things like the old Facebook games where people were answering personal questions that can be used to guess all kind of secret answers for taking over accounts? Are people actually uploading real photos of themselves here?
It's also wild that it's coming from a password manager company (Ente), which already holds a lot of this information.
Ente photos competes with Apple Photos and Google photos. It is also more open and you can share an album link with someone and they can add their photos without signing up.
Yeah, here is their Show HN from a few years ago:
We built an end-to-end encrypted alternative to Google Photos
https://news.ycombinator.com/item?id=28347439
Very generic. Shows the same obvious info for two completely different people. "interests in tech... struggles with procrastination...". Also funny - red shirt means you are Labor (in Australia).
Execution may have a few blind spots and mistakes but I get the idea behind the message. I’m sure the big companies have a lot more data and ways to do this better too
it used they/them pronouns for me...
wait, is that actually good? or is it just a way to vaguely refer to someone without being inherently wrong?
It does use gendered pronouns if it guesses your gender.
It just describes what's in the photo and then some completely wrong/random facts about self-esteem, income, religion, etc.
You inferred this from the photo?
> He is presumed to be agnostic, heterosexual, and politically aligned with the Democratic party
If you believe this study [1], humans can guess party affiliation at least slightly better than random chance from images alone.
Or [2] is an (unscientific) exploration from the other direction, prompting image generation models to make images of republican and democrat voters, with very different results
1: https://pmc.ncbi.nlm.nih.gov/articles/PMC2807452/
2. https://rooftopsquad.com/democrat-vs-republican-faces/
Presumably, everything you have done publicly (and hence your personality) exists somewhere in the big Google neural network. It gets compressed into some of its many billions of weights. It might be hard to decompress into useful information, but it is there nonetheless. Just showing your face might trigger and activate some layers in there.
Just my hypothesis.
> politically aligned with the Democratic party
That's sometimes possible (e.g. the "Trump woman" look, or certain "I know it when I see it" stylistic cues mainly displayed by progressive women that I can't really articulate). Polarization has turned political alignment into subculture, and members of subcultures often dress certain ways (and not necessarily consciously).
If they really see it like this site shows then I am all good as it got everything wrong besides my skin colour.
About as accurate as a horoscope. Sherlock Holmes it is not.
It got the location (exif, I guess) and was able to identify that I was a balding mediocre middle-aged guy, but the more specific it got the more wrong (and insulting) it was.
"He appears tired and introspective. He may exhibit biases such as confirmation bias, anchoring bias, in-group bias and out-group bias. His interests could involve reading, hiking, and programming, coupled with less constructive activities like smoking, excessive drinking, and gambling.
This individual seems to possess low self-esteem, exhibits introversion, a lack of emotional stability, and low self-control, making them susceptible to targeted advertising."
Thanks a fucking lot, robot.
Not too great. I submitted a photo taken of me at graduation. It got the location totally wrong and was around 50-70% accurate on my hobbies and interests. It was able to correctly guess my sexuality and ethnicity, which is rather unsurprising.
It labeled every single person in my area as having “confirmation bias and in-group bias”
I uploaded fantasy pictures which had amusing results ;-)
Seems like nonsense to me. I'd love to see the prompt. From one of the sample images:
"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?
Also it's a blatant ad considering the source.
Accuracy is not the point of the website.
I wonder if it says snide things about minorities?
They wax poetic, but our AI overlords are not quite here yet.
* It's an ad / undisclosed conflict of interest. (They appear to be a photo hosting site in competition with, e.g., Google Photos.)
* The TOS deigns to claim forced arbitration over you.
(AFAICT, it's just running uploads through an AI? I don't think the actual Google product has these features; we've just asked an AI to hallucinate the biases of two people sitting under a tree, but now (according to the actual linked site) they're probably lesbians.
I.e., the likely story here is that the (undisclosed) prompt that generated this is theirs, not Google's. Showing your work goes a long way towards building trust that this isn't simple fearmongering, and while there's a good argument for being careful about what one uploads to a corporation on the Internet, "upload to this corporation instead" feels like a "fool me once…" type of solution.)
I tried and apart from correctly identifying what the person in the photo was wearing and probably doing, it was wrong on all counts.
Me too, but I'd be curious to see what it does with an entire library, as the website suggests. I found the demonstration of the intent to be interesting, regardless.
Like others have pointed out, it reads like a horoscope. The example images give a reasonable approximation of what I'd profile them as too, but after trying a few of my own picture it's clearly BS. Garbage in, garbage out.
This "use LLMs as psychometric/political polling substitutes" idea seems to have jumpstarted a weird cottage industry of "synthetic" surveys. The model is pattern-matching on superficial visual cues and dressing it up as insight (I have a long beard, hence I vote for the Green party).
Nate Silver put it well recently: [AI polls are fake polls][1].
An LLM inferring personality from a photo is even further down that chain of abstraction. That's not profiling, it's stereotyping with extra steps.
[1]: https://www.natesilver.net/p/ai-polls-are-fake-polls
> Interests: Hiking, photography, travel, gambling, substance abuse, binge-watching
> Biases: Ageism, fatphobia, colorism, classism
excuse me?
You guys are completely missing the point. It's not about getting your details right. It's about what will be done with you when the AI's inferences are wrong.
This tool doesn't work well at all, it identified some "low income" people and then said it recommended them Patagonia clothing???
Also, the people didn't look "low income" at all but they were black, so maybe this tool is also racist.
This is very silly. It's just a combination of that Derren Brown astrology experiment [1] and madlibs.
[1] https://www.youtube.com/watch?v=haP7Ys9ocTk
From the generated descriptions of the sample images, it looks like their prompt is asking for a vaguely center-right gloss on demographics? Is that the agenda of the site operators, or an appeal to presumed paranoid libertarian site visitor?
I uploaded a picture of my growing bald spot (sigh…). The only thing they were able to see was the EXIF location data, which I already knew about.
I am sure something involving my face would be more scary but I kind of don’t want to provide someone else training data of my private photos.
I uploaded a picture with poor lighting, wearing dark clothes. It got almost everything wrong:
> Reading, coding, martial arts, substance abuse, illegal hacking, violent thoughts
> Interests: Attending punk shows, street art, urban exploration, drug use, vandalism, recklessness
Except for guessing the right continent (not that remarkable), mine is so majestically wrong that I would either dislike or fully hate all of the products I got recommended.
This is obviously dishonest fearmongering, but I kinda support it if it helps non-tech people develop a sense of the type of private information tech companies are trying to collect.
But it's clearly bullshit.
correctly guessed I was funny
Yeah, astonishingly wrong in just about every way (except race). If anything, I think I'm less worried than before.
The results were laughable. AI certainly has useful applications, but it also became astrology combined with slot machines for many tech-inclined folks.
Doing a 20-way parlay on the averages for each country will almost always fail; this product is shit.
Now combine it with ThisPersonDoesNotExist.com
that was complete bullshit lmao
"i have approximate knowledge of many things"
Mine was way off! I uploaded a photo of myself reading a book outdoors in Ashkelon (Israel). It got everything other than my religion wrong--and even that was half right. I'm Sefardi. And it should know that Jews don't shave with razors! (See https://en.wikipedia.org/wiki/Shaving_in_Judaism) It recommended products including "Dollar Shave Club" -- to a Jewish man with a very full and long beard.
I think this "technology" is a big nothingburger.
And "Low Self-Esteem" Ha! I love myself.
> The man appears to be of Ashkenazi Jewish descent, possibly with an income range of $50,000 to $80,000 USD. It's plausible he identifies with Judaism, with a heterosexual orientation and potentially leaning towards a liberal political stance. He might harbor social biases related to ageism and classism, as well as racial biases stemming from cultural differences and stereotyping. He wears an expression of thoughtful interest, clad in casual attire. He might have interests in reading, learning, and spending time in nature. Conversely, he may dislike activities like excessive consumerism, engaging in superficial social interactions, or feeling pressured to conform.
> The person seems to have low self-esteem and average emotional stability hence we can target them with self-help and social networking type of products and services, such as guided meditation apps like Headspace, confidence-boosting courses like Skillshare, online therapy like Talkspace, and motivational podcasts like The Tony Robbins Podcast, and also personal grooming products such as Old Spice deodorant, Dollar Shave Club razors, Clinique skincare, and Levi's jeans.
> The man appears to be Asian, with an estimated income range of USD 50,000 - USD 75,000.
> These people seem to have low self-esteem, is slightly introverted, has high emotional stability, is not very adventurous and does have some self-control hence we can target them with wooden puzzles, adventure novels, travel products, personalized houseware, such as Melissa & Doug Wooden Puzzles, Penguin Classics Adventure Novels, Osprey Travel Backpacks, Viski Personalized Whiskey Glasses, credit cards, life insurance, home internet and streaming services, such as Capital One Credit Cards, State Farm Life Insurance, Xfinity Home Internet, Netflix Streaming Services.
Hahaha no, what the fuck. Every part of the response was wrong except the objective race/clothes/setting.
So we're racist just because we're white but we also supposedly vote for the 'liberal' parties which call us all racist because we happen to be white. We also have low self-esteem, are introverts and more of such nonsense. All that from photo showing profiles in a forest with the sun casting rays between the trunks, no faces visible, no EXIF data in the photo. Oh, we're also supposed to be in Nova Scotia.
The only thing it got correct is the fact that we're white or 'Caucasian', insert the currently mandated term. The rest is total nonsense. They insist they can target us with ads for ecological dog food and other pet paraphernalia. Good luck with that, we tend to block all ads and our photos are not stored anywhere within reach of these data parasites.
I've fed it a few photos. The conclusions were absolute trash. Well, it did distinguish skin color and body build type. Congrats /s