Proof Over Content
A project arising from Gainesville's Spring '26 Build Week
Something I’d been wanting to build but refused to give time to. I finally designed and built it during Build Week (shout-out: buildweek.net).
A concept I’m calling verifiable outcome indexing, or VOI. Fair warning: I’ll do my best to explain concepts that are somewhat technical, but everyone should understand the goal, and the pain behind it.
AI systems are getting good at search. Unfortunately, they’re citing less-than-truthful content. The problem with the internet isn’t the inherent architecture; it’s that people can say anything. LinkedIn profiles. Self-reported portfolios. Blog posts claiming expertise. And AI is making it worse.
All claims, hearsay. No proof.
ProofIndex is a framework for indexing verified outcomes.
Not “I have 15 years of experience in X.”
Think: “Reduced client burn rate from $280K to $145K in 6 weeks. Client verified. CPA verified. Platform verified. Here’s the cryptographic signature.”
Ultimately, it’s talking vs. doing.
Any organization that can verify outcomes becomes a “truth node.” Universities verify student projects. GitHub verifies code deployments. Employers verify employee work. Each signs attestations cryptographically.
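To make "signs attestations cryptographically" concrete, here is a toy sketch. A real truth node would use asymmetric signatures (e.g. Ed25519) so anyone can verify with the node's public key; to keep this self-contained with only the standard library, HMAC stands in for the signature, and every field name and key here is an illustrative assumption, not the actual ProofIndex format.

```python
import hashlib
import hmac
import json

def sign_attestation(node_secret: bytes, attestation: dict) -> str:
    """Canonicalize the attestation and sign it. HMAC-SHA256 stands in
    for a real asymmetric signature to stay stdlib-only."""
    canonical = json.dumps(attestation, sort_keys=True).encode()
    return hmac.new(node_secret, canonical, hashlib.sha256).hexdigest()

def verify_attestation(node_secret: bytes, attestation: dict, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_attestation(node_secret, attestation)
    return hmac.compare_digest(expected, signature)

# A university node attests to an outcome (hypothetical identifiers).
attestation = {
    "outcome_id": "out-001",
    "attestor": "university-node",
    "claim": "capstone project shipped to production",
}
secret = b"node-signing-key"  # placeholder key material
sig = sign_attestation(secret, attestation)
assert verify_attestation(secret, attestation, sig)

# Tampering with any field invalidates the signature.
attestation["claim"] = "something else"
assert not verify_attestation(secret, attestation, sig)
```

The point is the property, not the primitive: once an attestation is signed, nobody, including the subject, can quietly edit the claim afterward.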
You can be pseudonymous on the network or publish yourself openly. Pseudonymous nodes still work; their attestations still count. But public nodes earn more trust. It’s skin in the game.
The outcome itself is structured data. Problem solved, actions taken, measurable result, timeframe, skills demonstrated, geography. All machine-readable, by design.
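A minimal sketch of what one of these machine-readable outcome records might look like. The field names mirror the list above (problem, actions, result, timeframe, skills, geography) but are my illustrative assumptions, not the actual ProofIndex schema.

```python
import json

# Illustrative outcome record using the burn-rate example from this post.
outcome = {
    "problem": "Runaway monthly burn at a Series A SaaS company",
    "actions": ["renegotiated vendor contracts", "cut unused infrastructure"],
    "result": {"metric": "monthly_burn_usd", "before": 280_000, "after": 145_000},
    "timeframe_weeks": 6,
    "skills": ["financial modeling", "vendor negotiation"],
    "geography": "Gainesville, FL",
}

# Serializing with sorted keys gives a stable, machine-readable payload
# that can be hashed, signed, and crawled.
payload = json.dumps(outcome, sort_keys=True)
restored = json.loads(payload)
assert restored["result"]["after"] == 145_000
```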
That last part matters most for AI. The embedded schema files don’t just say “crawl this.” They explain how to interpret what’s here, suggested credibility scores, attestation methods, confidence levels. Everything transparent. An AI reading a ProofIndex outcome knows it was verified by three independent sources, their institutional authority, the evidence hash, how much to trust it. The machine can use the suggested credibility or compute its own based on attestation convergence. Structured verification data instead of marketing copy.
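Here is one way a machine might compute its own credibility score from attestation convergence, as described above. The formula and the authority weights are assumptions for illustration: each independent source chips away at residual doubt, so three moderately trusted attestors beat one strong one.

```python
# Toy convergence score: combine per-source confidences as
# 1 - product(1 - authority * confidence). Weights are illustrative.
def credibility(attestations: list[dict]) -> float:
    doubt = 1.0
    for a in attestations:
        doubt *= 1.0 - a["authority"] * a["confidence"]
    return 1.0 - doubt

single = credibility([{"authority": 0.9, "confidence": 0.8}])
three = credibility([
    {"authority": 0.9, "confidence": 0.8},  # e.g. platform
    {"authority": 0.7, "confidence": 0.9},  # e.g. CPA
    {"authority": 0.5, "confidence": 0.7},  # e.g. anonymous client
])

# Convergence across independent sources raises trust.
assert three > single
```

Whatever the exact formula, the key design choice is that it is transparent: the AI can accept the suggested score or recompute one from the raw attestations.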
The obvious truth nodes are universities, employers, platforms, and other institutions that have credibility to risk. But it goes wider than that.
Local governments confirm your business exists: that your LLC is real and you’re licensed where you say you are. Companies verify not just “worked here” but “shipped this, improved that metric.” Employees leave with portable proof of what they actually did. Event organizers attest to participation. Clients can verify outcomes anonymously but cryptographically confirmed.
Each attestation adds signal. Trust compounds.
Ask ChatGPT to find a fractional CFO who’s reduced burn rate for Series A companies. It searches the web. Finds blog posts, LinkedIn profiles, maybe a podcast appearance. All self-reported. Zero verification.
With ProofIndex:
“James Anderson has 5 verified burn rate reductions for Series A SaaS companies, most recently from $280K to $145K, verified by ProofSites’ platform and anonymous client attestation.”
That’s true GEO (Generative Engine Optimization). And unlike SEO, gaming it backfires on both parties. Bad actors not only risk their own credibility, they poison the outcomes of whoever they attested for. Transparency levels the playing field.
Proof > content applies everywhere verification adds more value than claims. Local communities become stronger, more connected, and easier to access. It can even seed an economy of its own, one based on real results: not just revenue, but impact.
Anywhere “I can do X” matters less than “I did X, verified.”
That’s most places AI is making recommendations currently.
As with any new framework, some hard problems aren’t fully worked through yet.
Sybil resistance: what stops fake nodes from self-attesting? Simple answer: self-attestation isn’t allowed. Trust flows from anchor institutions anyway. Governments, universities, major platforms are hard to fake and become roots of trust. New nodes earn credibility through connections to those anchors, consistent behavior over time, skin in the game. Truth nodes can set up client accounts and record outcomes on their behalf. When the same node that recorded an outcome is also confirming it, that’s visible to everyone reading it.
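The anchor-rooted trust idea above can be sketched in a few lines. The decay factor, the anchor set, and the depth limit are all assumptions; the two properties that matter are that trust only flows outward from hard-to-fake anchors, and that self-attestation contributes nothing.

```python
# Hypothetical anchor institutions: roots of trust that are hard to fake.
ANCHORS = {"state-gov", "state-university"}

def node_trust(node: str, vouchers: dict[str, set[str]], depth: int = 0) -> float:
    """vouchers maps a node to the set of nodes that attested for it.
    Each hop away from an anchor halves trust (decay is an assumption)."""
    if node in ANCHORS:
        return 1.0
    if depth > 4 or node not in vouchers:
        return 0.0
    backers = vouchers[node] - {node}  # self-attestation doesn't count
    if not backers:
        return 0.0
    return 0.5 * max(node_trust(b, vouchers, depth + 1) for b in backers)

vouchers = {
    "new-bootcamp": {"state-university"},  # vouched for by an anchor
    "shady-node": {"shady-node"},          # only self-attests
}
assert node_trust("new-bootcamp", vouchers) == 0.5
assert node_trust("shady-node", vouchers) == 0.0
```

A ring of fake nodes vouching for each other never reaches an anchor, so under this rule their trust stays at zero no matter how many attestations they trade.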
The model also distinguishes between receivers and witnesses. The client whose burn rate dropped vs. the business whose checkout processed the revenue. Both can attest. They’re treated as different kinds of evidence, because they are. The receiver is the ultimate authority on the outcome, if the machine prefers to treat it that way.
Privacy versus verification pulls in two different directions. Identity anchoring is separate from outcome publication. A government confirms “this key belongs to a real business” without that being public. Outcomes attach to pseudonymous identifiers. You control what links to your public name.
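A small sketch of how identity anchoring can stay separate from outcome publication: outcomes attach to an identifier derived from a key, while the mapping from key to real-world identity lives only with the anchoring institution. The derivation below (truncated SHA-256 of a public key) is an illustrative assumption.

```python
import hashlib

def pseudonymous_id(public_key: bytes) -> str:
    """Derive a stable pseudonymous identifier from a public key.
    Nothing about the real-world identity is recoverable from the ID."""
    return hashlib.sha256(public_key).hexdigest()[:16]

pub = b"example-public-key-bytes"  # placeholder key
pid = pseudonymous_id(pub)

# Deterministic: outcomes published under this ID all link together,
# while linking the ID to a public name remains the owner's choice.
assert pid == pseudonymous_id(pub)
assert len(pid) == 16
```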
When nodes disagree, that’s a feature, not a bug. One node confirms, another refuses to confirm, and the system surfaces both. AI systems see the lack of confirmation and weigh accordingly. A university confirms what it can see. Stripe confirms what it can see. Different evidence, same outcome, higher trust.
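Surfacing disagreement rather than hiding it might look something like this. The statuses and the naive weighing rule are illustrative assumptions; the point is that refusals stay visible alongside confirmations, and any reader can weigh them.

```python
# Summarize attestations on one outcome, keeping disagreement visible.
def summarize(attestations: list[dict]) -> dict:
    confirmed = [a["node"] for a in attestations if a["status"] == "confirmed"]
    refused = [a["node"] for a in attestations if a["status"] == "refused"]
    return {
        "confirmed_by": confirmed,
        "refused_by": refused,
        # Naive weighing: share of responding nodes that confirmed.
        "confidence": len(confirmed) / max(len(attestations), 1),
    }

summary = summarize([
    {"node": "university", "status": "confirmed"},
    {"node": "stripe", "status": "confirmed"},
    {"node": "former-employer", "status": "refused"},
])
assert summary["refused_by"] == ["former-employer"]
```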
AI search adoption is a bet. Structured verified data should win eventually, it’s objectively better input. But the timeline isn’t fully certain. I believe in building infrastructure for where things are going, not where they are.
What actually got built is a working proof of concept. Database, standardized schema, submission form, public pages with machine-readable markup. Cloud hosted for now, with a roadmap toward distributed storage once the model is proven. No single entity controlling the data, as it should be.
Not scalable infrastructure, yet. I do believe current distributed ledger technologies can be leveraged for the scale needed.
Now I need to find the people and organizations that should be first to use it.
On the organization side: founding truth nodes: bootcamps tracking placement rates, universities reviewing student capstones, agencies proving client results, employers who want to give people portable proof of the work they actually did. Organizations that already verify outcomes and want to make those outcomes AI-discoverable before this becomes obvious to everyone.
On the individual side: people who see where this is going. Technical or not. If the Proof > Content principle resonates, if you’ve felt the frustration of watching credentials and connections matter more than actual demonstrated work, I want to hear from you. Early contributors shape what this becomes.
Either way, reach out. I’m doing this manually and want this to be a community effort because that’s how it should be.
https://proof.site/Brendan_Lammond_/proofindex


