Proof of Inference for Private AI on Midnight

Problem

How can we trust the output of an AI system, especially one that draws on external data, without revealing the data, the model, or the user's inputs?

Solution

Build a privacy-preserving RAG dApp that verifies AI predictions off-chain using Midnight’s Compact smart contracts and zero-knowledge attestations.
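To illustrate the core idea behind such zero-knowledge attestations, the sketch below shows a minimal commit-and-verify pattern in Python: the private inputs and output of a RAG inference are hash-committed, only the digest would be posted on-chain, and a verifier (or, in the real dApp, a ZK circuit compiled from a Compact contract) checks the opening without the chain ever seeing the data. The function names and JSON payload layout are illustrative assumptions, not the project's actual design.

```python
import hashlib
import json


def commit_inference(query: str, retrieved_docs: list[str],
                     model_output: str, salt: str) -> str:
    """Hash-commit to the private inputs and output of a RAG inference.

    Only this digest would be published on-chain; the query, retrieved
    context, and model output stay private with the user. (Illustrative
    sketch -- a production system would use a ZK-friendly commitment.)
    """
    payload = json.dumps(
        {"query": query, "docs": retrieved_docs,
         "output": model_output, "salt": salt},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def verify_opening(commitment: str, query: str, retrieved_docs: list[str],
                   model_output: str, salt: str) -> bool:
    """Check that a revealed (query, docs, output, salt) opening matches
    the on-chain commitment. In the actual dApp this check would be
    proven inside a zero-knowledge circuit rather than by revealing
    the data to the verifier."""
    return commit_inference(query, retrieved_docs,
                            model_output, salt) == commitment
```

In the proposed architecture, the off-chain prover would run this style of check inside a circuit and submit only the resulting proof and commitment to the Midnight contract, so the contract attests to the inference without learning its contents.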

9,000 $USDM
Total funds requested

About this idea

Team

Stephen Whitenstall - Project Lead Developer

Stephen has over 30 years' experience in software development. He is currently developing a RAG system for the SingularityNET Ambassadors' Archive (https://github.com/SingularityNET-Archive/Archive-RAG) and has a working understanding of zero-knowledge proofs, as evidenced by this Python simulation (https://github.com/stephen-rowan/zero-knowledge-identity-system). Consequently, he has picked up the accessible, TypeScript-based language of Midnight's Compact contracts with ease.

He has already prototyped the integration of LLM-based retrieval pipelines with off-chain proof generation, built secure backend services, deployed and tested Compact smart contracts, and created a functional UI connected to the Lace wallet.

With a background as an IBM Test Programme Manager, he has strong capabilities in software prototyping, development, and testing, including privacy checks and fuzz testing, along with extensive experience in open-source documentation and repository organization.