essay · on the architecture · 6 min
why the matching layer is physically blind to photos.
The first thing to say is that the choice is not subtle. The matching engine does not 'deprioritize' photos. It does not 'hide' them in the UI while quietly consuming them under the hood. It has no path to them. The function that takes two profiles and returns a similarity score is typed in a way that makes a photo impossible to pass in. If you tried, the build would fail.
That sentence is the entire pitch.
I want to write down why this is the design, because it took me a year of looking at the consumer-app ecosystem to realize the question is not 'should photos appear later in the funnel' but 'should the model be allowed to see them at any point during ranking.' Those two questions sound similar. They are not.
what the apps actually optimize
The dating apps you know rank profiles. That much is obvious. What is less obvious is what they are ranking on. The product manager will tell you they rank on 'compatibility' and 'engagement' and a few other words that mean different things to different people. The model, internally, is doing something simpler. It is learning a function from attractiveness to next-action probability. Because photos dominate the attention budget on the selection surface, the model that survives the A/B tests is the one that gets very good at predicting which photos a person will pause on.
This is a clean, narrow, defensible objective. The companies that built around it are correct that it works as a business. They are wrong, in a way I find hard to be polite about, when they claim it is what the users wanted.
What the users wanted was to find their people. The model they actually got finds them faces.
the local fix that doesn't work
A natural first instinct is to leave the model intact and adjust the surface. Hide photos in the inbox. Blur them on the profile card. Reveal them after three messages. Several apps have tried this. It does not change the inbound. The matching engine is still ranking faces; it is just removing them from one view. The people you see in your queue are still the people the model thinks are pretty for someone with your taste, where 'your taste' is itself a learned object from your past selection behavior. You have not escaped. You have moved the eye-mask.
I built a version of that for a week. The week was useful. The week is why this essay exists.
the actual fix is a type signature
Here is what I changed.
```ts
function rank(viewer: Profile, candidate: Profile): number
```

Profile carries text answers to five short prompts, a 30-second voice clip (transcribed and kept as text), a small set of intent flags (friendship, relationship, community), and metadata. No photo. There is no field for it. The photo lives in a different table, behind a different read path, gated by a mutualVibe boolean. The function above cannot reach the table.
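For concreteness, here is roughly the shape that implies. This is a sketch, not the repo's actual definitions; every name except rank and mutualVibe is my invention.

```ts
type Intent = "friendship" | "relationship" | "community";

// Sketch of the Profile shape the ranker sees. Note what is absent:
// no photoUrl, no imageBytes, no field that could ever hold pixels.
interface Profile {
  id: string;
  prompts: string[];       // text answers to five short prompts
  voiceTranscript: string; // the 30-second clip, kept only as text
  intents: Intent[];
}

// Photos live behind a separate read path, gated on mutual interest.
// rank() never receives a PhotoStore, so it cannot reach this table.
interface PhotoStore {
  getPhoto(profileId: string, mutualVibe: boolean): Promise<Blob | null>;
}
```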
The embedding step takes the text and produces a 1536-dim vector. Cosine similarity does the work. Two soft rerankers (ideology distance, shared-passion overlap) break ties. None of them have ever, in any version of the code, taken pixels.
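A minimal sketch of that scoring path, built on the Profile sketch above. embed() is a stand-in for the real 1536-dim text embedding, the reranker bodies are placeholders, and the tie-breaking weights are invented.

```ts
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Stand-in: the real step calls a text-embedding model. The only point
// that matters here is the input type: strings in, numbers out.
function embed(p: Profile): number[] {
  const text = [...p.prompts, p.voiceTranscript].join("\n");
  return Array.from({ length: 1536 }, (_, i) => text.charCodeAt(i % text.length) || 0);
}

// Placeholder rerankers; the real ones are tuned, these are not.
function ideologyDistance(a: Profile, b: Profile): number {
  return 0;
}
function sharedPassionOverlap(a: Profile, b: Profile): number {
  const shared = a.intents.filter((i) => b.intents.includes(i)).length;
  return shared / Math.max(a.intents.length, b.intents.length, 1);
}

function rank(viewer: Profile, candidate: Profile): number {
  const base = cosine(embed(viewer), embed(candidate));
  // Soft rerankers break ties; they nudge the score, never override it.
  return base
    + 0.05 * (1 - ideologyDistance(viewer, candidate))
    + 0.05 * sharedPassionOverlap(viewer, candidate);
}
```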
You can read this in the public repo. It is github.com/donnowyu/soulmate-core. The function I quoted is the entry point. The build fails on a type error if you try to thread an image through.
why typing it matters more than disabling it
A flag can be flipped. A type signature, in a real codebase with a build pipeline, cannot be quietly loosened; break it and the build breaks with it. This is the same reason banks use foreign-key constraints instead of a 'remember not to insert orphan rows' memo. A constraint that lives in the artifact survives. A constraint that lives in the team's good intentions does not.
I wanted the photo-blindness to survive my own future temptations. There are many lines of code I will write in the next year. Some of them, on tired evenings, will look at the engagement charts and propose a quiet little A/B test that 'leverages photo signal as a secondary input.' A type-level constraint catches that draft before it merges. A README warning does not.
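To make that concrete, here is roughly what the caught draft looks like. The Photo type is hypothetical; the failure mode is the real mechanism the essay describes.

```ts
interface Photo {
  bytes: Uint8Array;
}

declare const viewer: Profile;
declare const headshot: Photo;

// The tired-evening draft. It does not merge, because it does not build:
//   error TS2345: Argument of type 'Photo' is not assignable to
//   parameter of type 'Profile'.
rank(viewer, headshot);
```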
what people actually feel
The thing I did not expect, and the thing I want to write about more carefully in a later essay, is how the feeling changes downstream of the structural choice. When you know the system has never ranked you on your face, you stop performing for it. The voice clip becomes a small private thing you say into your phone about a thing you love. The prompts get longer and quieter. The replies, on the other end, tend to be from people who chose to answer the words.
I will write more about that next week. For now, this essay is about the architectural decision. The thing I want you to leave with is this: a dating app that physically cannot see a face is not the same product as a dating app that conventionally hides faces. The first is a different machine. We built the first.
If you want to try it, it is at byvibration.com. If you want to read the engine, the link is above. If you want to talk about why this matters, the comment box is below.
If this argument lands, the cadence companion is at byvibration.com/essays/letters-mode-is-mercy. For the user-facing exhaustion this architecture is trying to address, see byvibration.com/essays/why-dating-apps-feel-exhausting.
I work on byvibration. The matching engine, soulmate-core, is open source. Both pieces exist because I spent a year on dating apps and concluded that the problem was structural, not behavioral, and that someone had to write the version where the structure itself was different.