How can we trust the output of an AI system, especially one using external data, without revealing the data, the model, or the user’s inputs?
Build a privacy-preserving RAG dApp that verifies AI predictions off-chain using Midnight’s Compact smart contracts and zero-knowledge attestations.
Please provide your proposal title
Proof of Inference for Private AI on Midnight
Please specify how many months you expect your project to last
3
Please indicate if your proposal has been auto-translated
No
Original Language
en
What is the problem you want to solve?
How can we trust the output of an AI system, especially one using external data, without revealing the data, the model, or the user’s inputs?
Supporting links
Does your project have any dependencies on other organizations, technical or otherwise?
No
Describe any dependencies or write 'No dependencies'
No dependencies
Will your project's outputs be fully open source?
Yes
Please provide here more information on the open source status of your project outputs
The project will be fully open source, hosted on GitHub with public issues, milestones, and documentation. Code for the RAG service, ZK prover, and Compact contracts will be released under the Apache 2.0 license, allowing reuse, modification, and commercial adoption while preserving attribution, thus encouraging community collaboration and future Midnight dApp innovation.
Please choose the most relevant theme and tag related to the outcomes of your proposal
AI
What is useful about your DApp within one of the specified industry or enterprise verticals?
This dApp directly addresses a growing pain point across regulated industries: how to harness AI for decision support or customer engagement without exposing sensitive data or relying on unverifiable outputs.
Industry Context
Modern AI systems, especially large language models (LLMs), are being rapidly adopted across finance, healthcare, legal, and enterprise data intelligence. However, these systems face two fundamental adoption barriers: they require users to expose sensitive data to the inference provider, and they offer no way to independently verify that an output was produced from the correct data.
The proposed dApp, Private RAG Inference, provides a verifiable, privacy-preserving AI inference framework built atop the Midnight network’s zero-knowledge and Compact smart-contract environment.
Core Value Across Verticals
Finance:
Healthcare & Life Sciences:
Legal / Document Intelligence:
Performs RAG-style summarization or precedent retrieval over confidential documents.
Outputs are provably linked to the source corpus, reducing hallucination risk and supporting discovery compliance.
Enterprise Knowledge Management:
Broader Ecosystem Utility
Within the Midnight ecosystem, this DApp:
In short, it creates a bridge between enterprise AI needs and Web3 privacy guarantees, illustrating a practical, fundable use case for Midnight in one of the most active technology domains today.
What exactly will you build? List the Compact contract(s) and key functions/proofs, the demo UI flow, Lace (Midnight) wallet integration, and your basic test plan.
We will build a reference DApp demonstrating private, verifiable AI inference using a Retrieval-Augmented Generation (RAG) pipeline and Midnight Compact contracts for proof verification.
Core Components
(A) Off-Chain Components
RAG Service
Prover / Attestation Engine
API Gateway
(B) On-Chain Components — Compact Contracts
We plan to deploy two Compact contracts:
VerifierContract.compact
Verifies incoming proofs or signed attestations.
Functions:
AttestationRegistry.compact
Records verified attestations in Midnight’s shielded ledger (private) or as attestation NFTs (public).
Functions:
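Since the contract interfaces are still being designed, the sketch below is a minimal TypeScript model of the intended behaviour, not Compact code. The function names (commit, verifyAttestation, record) and the choice of Ed25519 signatures over SHA-256 commitments are illustrative assumptions, not the final API.

```typescript
import {
  createHash, generateKeyPairSync, sign, verify, KeyObject,
} from "node:crypto";

// Commit to the inference transcript without revealing it.
export function commit(query: string, context: string, answer: string): string {
  return createHash("sha256")
    .update(JSON.stringify([query, context, answer]))
    .digest("hex");
}

export interface Attestation {
  commitment: string;        // hex SHA-256 over (query, context, answer)
  signature: Buffer;         // prover's Ed25519 signature over the commitment
  proverPublicKey: KeyObject;
}

// Models VerifierContract: accept only attestations whose signature
// over the commitment verifies against the prover's key.
export function verifyAttestation(a: Attestation): boolean {
  try {
    return verify(null, Buffer.from(a.commitment, "hex"), a.proverPublicKey, a.signature);
  } catch {
    return false;
  }
}

// Models AttestationRegistry: store commitments of verified attestations.
export class AttestationRegistry {
  private records = new Set<string>();
  record(a: Attestation): boolean {
    if (!verifyAttestation(a)) return false;
    this.records.add(a.commitment);
    return true;
  }
  has(commitment: string): boolean {
    return this.records.has(commitment);
  }
}
```

Only the commitment, never the raw transcript, reaches the registry; the real contracts will express the same check in Compact.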
(C) Frontend Demo UI
React + TypeScript app with three tabs:
Workflow:
Includes Developer Mode panel for test inputs and manual proof submission.
(D) Wallet Integration
Uses Lace SDK to:
(E) Testing Plan
Testing will follow open-source best practices:
Unit Tests
Validate retriever ranking, hashing correctness, circuit computations, and Compact contract logic (mock proofs).
Integration Tests
E2E pipeline: user input → proof generation → transaction submission → on-chain verification → attestation retrieval.
Privacy & Security Tests
Confirm no raw text leaves the local RAG service.
Fuzz invalid proofs and replay attacks on VerifierContract.
Performance Tests
Measure average proof generation time (<10 s target for simple predicates).
Monitor on-chain transaction costs and finality times on devnet.
Documentation Tests
Scripts to rebuild environment and rerun tests; ensures reproducibility for other developers.
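As a sketch of the planned security fuzzing, the snippet below mutates otherwise valid commitments and checks that a verifier predicate rejects every mutation. The helper names and the bare SHA-256 commitment check are assumptions standing in for the real VerifierContract logic on devnet.

```typescript
import { createHash, randomBytes } from "node:crypto";

const commitTo = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

// The check the contract is expected to perform: the submitted
// commitment must match a recomputed commitment over the transcript.
const accepts = (transcript: string, commitment: string): boolean =>
  commitTo(transcript) === commitment;

// Fuzz round-trip: a valid commitment must always pass, and any
// single-character tampering must always be rejected.
export function fuzzInvalidProofs(rounds = 1000): boolean {
  for (let i = 0; i < rounds; i++) {
    const transcript = randomBytes(32).toString("hex");
    const good = commitTo(transcript);
    // Flip one hex digit at a random position (XOR with 1 always changes it).
    const pos = Math.floor(Math.random() * good.length);
    const flipped =
      good.slice(0, pos) +
      (parseInt(good[pos], 16) ^ 1).toString(16) +
      good.slice(pos + 1);
    if (!accepts(transcript, good)) return false;   // valid must pass
    if (accepts(transcript, flipped)) return false; // tampered must fail
  }
  return true;
}
```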
Deliverables Summary
Source Code Repository Structure :
At completion, anyone will be able to run:
npm install ; npm run dev
to query the RAG service, generate a proof, and verify it on the Midnight devnet.
How will other developers learn from and reuse your repo? Describe repo structure, README contents, docs/tutorials, test instructions, and extension points. Which developer personas benefit, and how will you gauge impact (forks, stars, issues, remixes)?
1 - Developer Learning and Reuse Strategy
This open-source repository will provide a modular foundation for privacy-preserving AI systems. The architecture will enable developers to use components independently or integrate them together, supporting diverse adoption patterns from learning to production deployment.
2 - Repository Structure and Modularity
The repository will be organized as a modular monorepo with four primary components designed for standalone use: an AI service for document processing, a proof generation system for cryptographic attestations, smart contracts for on-chain verification and private storage, and a frontend reference implementation. Each component will include documentation, test suites, and dependency management, enabling developers to adopt individual pieces without requiring the full stack. This modular design will support multiple adoption patterns, ensuring the repository will serve as both a complete reference implementation and a library of reusable components.
3 - Documentation and Learning Resources
Comprehensive documentation will support multiple learning paths through architecture documentation, setup guides, integration examples, API references, and contract documentation. Each component will include dedicated documentation covering overview, features, setup, usage examples, configuration, testing, and integration patterns. This documentation-first approach will ensure developers can understand and use the system without extensive code reading, lowering adoption barriers.
4 - Testing Infrastructure
The repository will include comprehensive test coverage: unit tests, integration tests, privacy tests, performance tests, and contract tests. Automated test orchestration will enable running all test suites with a single command, ensuring reliability and making it easy for contributors to verify changes.
5 - Extension Points and Customization
The codebase will provide clear extension points. The AI service will support custom embedding models, retrieval algorithms, and language model providers. The proof generation system will allow custom circuit designs, witness formats, and alternative proof systems. Smart contracts will support additional functions, custom privacy logic, and new contract types. The frontend will support custom components, different wallet integrations, and alternative frameworks. Integration patterns will enable standalone usage, microservice architectures, and custom workflows.
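To make the embedding extension point concrete, here is one possible shape for a pluggable provider interface, with a toy deterministic implementation useful in tests. The interface, class, and function names are hypothetical, not the repo's final API; a real provider would call an embedding model and would likely be asynchronous.

```typescript
// Hypothetical extension point: the retriever depends only on this
// interface, so embedding models can be swapped without touching the
// rest of the pipeline.
export interface EmbeddingProvider {
  embed(text: string): number[];
}

// Toy deterministic provider: buckets character codes into a small
// fixed-size vector. Good enough to exercise the retriever in tests.
export class HashEmbedding implements EmbeddingProvider {
  constructor(private dim = 8) {}
  embed(text: string): number[] {
    const v = new Array<number>(this.dim).fill(0);
    for (let i = 0; i < text.length; i++) {
      v[text.charCodeAt(i) % this.dim] += 1;
    }
    return v;
  }
}

// Ranking is provider-agnostic: cosine similarity over whatever
// vectors the configured provider returns.
export function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}
```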
6 - Developer Personas and Use Cases
The repository will serve 7 primary developer personas. Privacy-focused AI developers will benefit from a complete pipeline with privacy guarantees and zero-knowledge proof integration, enabling medical AI, financial services, and confidential document processing. Blockchain developers will gain real-world smart contract examples. Zero-knowledge proof engineers will access complete proof generation workflows. Full-stack developers will find an end-to-end reference implementation. AI engineers will discover production-ready retrieval systems. Open source contributors will benefit from clear guidelines and modular architecture. Researchers will receive a complete implementation for studying privacy-preserving AI.
7 - Impact Measurement Framework
Tracking how many people star, fork, or watch the project to gauge general interest and visibility.
Looking at how often issues and PRs are created, how fast they get resolved, and how many people contribute—showing how active and engaged the community is.
Measuring how often the project is actually used—such as package downloads, container image pulls, or references in other projects.
Tracking views of docs or tutorials to understand how much users rely on learning materials.
Monitoring how many active forks or custom versions of the project exist, indicating how often people adapt or build on it.
Assessing things like communication activity (e.g., discussions, comments) and how diverse the contributors are, reflecting the vitality of the community.
8 - Value Proposition
This repository will provide a unique combination of privacy-preserving AI capabilities, zero-knowledge proof integration, and blockchain-based verification in a single, well-documented, modular codebase. The open-source license will ensure maximum reuse and collaboration. Comprehensive documentation and testing infrastructure will lower barriers to adoption. Modular architecture will enable flexible integration patterns. Clear extension points will facilitate customization for diverse use cases. The repository will serve as both a complete reference implementation and a library of reusable components, enabling developers to learn best practices, integrate components into existing systems, extend functionality for new use cases, and contribute improvements back to the community.
Please describe your proposed solution and how it addresses the problem
The core problem is that AI systems today require users to share sensitive data with an inference provider and provide no cryptographic guarantee that the AI followed the correct retrieval or reasoning process. This limits adoption in finance, healthcare, legal services, and enterprise intelligence, where privacy, auditability, and verifiable correctness are mandatory. Users want strong assurances that their data stays private and that AI outputs can be trusted without exposing inputs, documents, or model internals.
Our proposed solution is an open-source Private RAG Inference DApp that combines off-chain AI computation with on-chain verification to deliver private, trustworthy predictions. The system uses Retrieval-Augmented Generation (RAG) to process confidential user input and private documents off-chain via a local vector database and a small, open-source LLM. Instead of sending user data or retrieved text to the blockchain, it generates a cryptographic attestation, either a signed commitment or a simple zero-knowledge predicate, that proves the AI used the correct retrieval context and produced an answer consistent with the committed data. This attestation contains only hashed or committed values, never raw content.
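A minimal sketch of what such an attestation payload could look like, assuming SHA-256 commitments; the field names and structure are illustrative, not the final attestation format.

```typescript
import { createHash } from "node:crypto";

const h = (s: string) => createHash("sha256").update(s).digest("hex");

// Illustrative attestation payload: only digests, never raw content.
export interface RagAttestation {
  queryCommitment: string;    // H(user query)
  contextCommitment: string;  // H(ordered chunk hashes)
  answerCommitment: string;   // H(model answer)
}

export function attest(query: string, chunks: string[], answer: string): RagAttestation {
  // Commit to each retrieved chunk individually, then to the ordered
  // set, so the proof binds the answer to the exact retrieval context.
  const chunkHashes = chunks.map(h);
  return {
    queryCommitment: h(query),
    contextCommitment: h(chunkHashes.join("")),
    answerCommitment: h(answer),
  };
}
```

The on-chain side sees only hex digests; the raw query, documents, and answer never leave the local RAG service.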
A Midnight Compact contract verifies the attestation and records the result privately in Midnight’s shielded ledger. This ensures the user preserves full confidentiality while still proving that the AI inference was performed correctly and against an approved dataset. The approach solves the trust problem by offering a verifiable record of correctness and solves the privacy problem by never revealing sensitive inputs or documents to the chain or to other parties.
This architecture, off-chain inference with on-chain proof verification, is intentionally chosen because it is both practical and scalable. Running LLMs or retrieval pipelines on-chain is computationally infeasible, but verifying signatures, commitments, or simple ZK predicates is efficient and fits Midnight’s design. This method provides a realistic pattern for building regulated AI workflows where decisions must be provably compliant, reproducible, and private.
The solution demonstrates how Midnight can serve as the trust layer for AI: enabling private inference, verifiable retrieval, and cryptographically accountable outputs for any industry that requires proof without disclosure. By publishing the full implementation as open source, the project provides developers with a reusable reference architecture that can be adapted for risk scoring, medical triage tools, compliance review systems, or enterprise document agents, wherever privacy and trust are essential.
Please define the positive impact your project will have on Midnight ecosystem
This proposal delivers a direct and meaningful impact on the Midnight ecosystem by providing one of the first fully working reference dApps that demonstrates how to combine private AI inference with verifiable on-chain attestations using Compact contracts. It showcases a high-value pattern, off-chain AI plus on-chain proof verification, that aligns with Midnight’s mission to enable privacy-preserving, compliance-friendly applications. By publishing the project as open source with clear documentation, tutorials, and a modular architecture, it significantly lowers the barrier for developers who want to build private, trustworthy AI workflows on Midnight.
The dApp provides a concrete example of how to use Midnight’s ZK-friendly ledger, Compact contract system, and Lace wallet integration in a real end-to-end application. This practical demonstration accelerates developer onboarding and helps teams quickly understand how to build privacy-first solutions on the platform. It offers reusable components (proof generation, attestation verification, data commitment structures, and UI patterns) that other builders can adapt to finance, healthcare, legal, and enterprise automation use cases.
By illustrating a compelling, privacy-centric AI use case, the project strengthens Midnight’s positioning as the ideal chain for private, compliant, verifiable AI. It supports ecosystem growth by inspiring new developer projects, Catalyst submissions, hackathon entries, and startup prototypes. The resulting repository and documentation will serve as a learning resource, a building block for more advanced ZK workflows, and a reference for Midnight’s growing community.
In short, the proposal expands Midnight’s developer ecosystem, demonstrates a key platform capability, accelerates adoption, and establishes Midnight as a leading environment for trustworthy AI applications.
What is your capability to deliver your project with high levels of trust and accountability? How do you intend to validate if your approach is feasible?
This project can be delivered with high levels of trust and accountability because it follows a transparent, open-source development model, uses well-understood engineering patterns, and is scoped to a realistic 3-month roadmap with clear, testable milestones. The workflow (RAG inference, hashing/commitment schemes, signature verification, and simple ZK predicates) is technically mature and widely used in modern AI and blockchain systems. Each component will be developed in a modular way with public code commits, automated tests, milestone demos, and regular documentation updates to ensure all work is auditable and verifiable.
Our capability to deliver is strengthened by established expertise in AI engineering, retrieval pipelines, cryptographic attestation design, smart contract development, and full-stack dApp integration. The project’s architecture keeps complexity manageable by using off-chain AI and on-chain verification, avoiding unrealistic or experimental requirements such as on-chain model execution. This ensures predictable progress and reduces implementation risk.
Feasibility will be validated through a combination of incremental development, test-driven implementation, and end-to-end devnet trials. Each milestone includes unit tests, contract tests, integration tests, and security fuzz tests to confirm correctness. The prototype will be deployed on the Midnight devnet to confirm real transaction flows, Lace wallet integration, and Compact contract behavior. This provides concrete proof that the design works within Midnight’s current capabilities. Community feedback, issue tracking, and repo activity provide additional validation.
By building openly, testing continuously, and demonstrating working code on devnet, the project ensures high accountability and feasibility from start to finish.
Please provide a cost breakdown of the proposed work and resources
The total requested budget is $9,000 across three milestones, each scoped for a one-month delivery window and designed to represent a realistic, low-risk reference dApp implementation on Midnight devnet. All work is delivered open-source under Apache 2.0 and includes engineering, testing, documentation, and project management overhead.
Milestone 1 — Foundations: Private RAG & Attestation Prototype
Work Scope & Resource Allocation
This milestone establishes the core off-chain AI pipeline, hashing and attestation logic, and base repository structure. The work requires an AI/ML engineer, backend engineer, and light devops support.
Resource Breakdown:
What the Cost Covers
This milestone builds the computational “engine” that the rest of the project relies on.
Milestone 2 — Compact Contract, UI, and Wallet Integration
Work Scope & Resource Allocation
This milestone connects the attestation system to Midnight’s Compact smart contracts and provides a minimal but functional UI for user interaction and verification.
Resource Breakdown:
What the Cost Covers
This milestone demonstrates end-to-end integration with the Midnight ledger and ecosystem tooling.
Milestone 3 — Predicate ZK Circuit + End-to-End Hardening
Work Scope & Resource Allocation
This milestone adds the optional ZK predicate pathway, hardens the entire system, expands security tests, and finalizes onboarding documentation.
Resource Breakdown:
What the Cost Covers
This milestone ensures the solution is robust, private, verifiable, and developer-friendly.
Budget Summary
Milestone 1 (Month 1): $3,000
Milestone 2 (Month 2): $3,000
Milestone 3 (Month 3): $3,000
Total: $9,000
How does the cost of the project represent value for the Midnight ecosystem?
The proposed $9,000 budget represents strong value for money because it delivers a fully functional, open-source reference dApp that demonstrates private AI inference with verifiable attestations on Midnight. Each milestone is tightly scoped to produce tangible outputs: working code, test suites, and developer documentation, ensuring that every dollar directly advances the ecosystem’s adoption and utility.
Milestone 1 ($3,000, Month 1): Establishes the core RAG pipeline, chunking, hashing, and attestation system. This foundational work produces a reusable off-chain AI service that developers can extend or integrate into other privacy-preserving applications. By implementing robust unit tests and example attestations, the ecosystem gains a reference implementation of private inference without exposing user data. The investment ensures that subsequent milestones build on a stable, verifiable foundation.
Milestone 2 ($3,000, Month 2): Connects the off-chain AI system to Midnight Compact contracts and the Lace wallet, demonstrating end-to-end interaction on the devnet. This milestone produces deployable contracts, a minimal React UI, and devnet deployment scripts, providing the ecosystem with a working example of smart contract integration for privacy-preserving AI. The cost directly translates to assets (contracts, UI, scripts) that developers can fork, modify, and reuse, accelerating ecosystem growth and lowering the barrier to entry.
Milestone 3 ($3,000, Month 3): Adds optional ZK predicate verification, privacy tests, security fuzzing, and full documentation. This ensures the dApp is robust, secure, and fully auditable, which is critical for trust in private AI applications. The deliverables (ZK circuits, prover scripts, end-to-end test logs, and an extension guide) maximize ecosystem value by creating reusable, production-relevant patterns for any developer building privacy-first applications on Midnight.
Overall, $9,000 over 3 months funds a compact, high-impact prototype that provides multiple reusable assets: code, contracts, UI, scripts, tests, and documentation. These outputs are immediately usable by other developers, lowering onboarding friction and demonstrating best practices for privacy-preserving AI on Midnight. Every milestone directly strengthens the ecosystem’s capabilities, offering long-term value far exceeding the initial investment.
By delivering open-source, well-documented, and tested software, the proposal ensures that the ecosystem gains a reference architecture that can be adapted, extended, and scaled, giving Midnight developers a proven pattern for combining AI, privacy, and on-chain verification, an outcome that offers sustained, ecosystem-wide benefits relative to cost.
I confirm that the proposal clearly provides a basic prototype reference application for one of the areas of interest.
Yes
I confirm that the proposal clearly defines which part of the developer journey it improves and how it makes building on Midnight easier and more productive.
Yes
I confirm that the proposal explicitly states the chosen permissive open-source license (e.g., MIT, Apache 2.0) and commits to a public code repository.
Yes
I confirm that the team provides evidence of their technical ability and experience in creating developer tools or high-quality technical content (e.g., GitHub, portfolio).
Yes
I confirm that a plan for creating and maintaining clear, comprehensive documentation is a core part of the proposal's scope.
Yes
I confirm that the budget and timeline (3 months) are realistic for delivering the proposed tool or resource.
Yes
I Agree
Yes
Stephen Whitenstall - Project Lead Developer
Stephen has over 30 years' experience in development. He is currently developing a RAG system for the SingularityNET Ambassadors' Archive (https://github.com/SingularityNET-Archive/Archive-RAG) and has an understanding of zero-knowledge proofs (as evidenced by this Python simulation: https://github.com/stephen-rowan/zero-knowledge-identity-system). Consequently he has assimilated the accessible TypeScript-based language of Midnight’s Compact contracts with ease.
He has already prototyped an integration of LLM-based retrieval pipelines with off-chain proof generation, building secure backend services, deploying and testing Compact smart contracts, and creating a functional UI connected to the Lace wallet.
With a background as an IBM Test Programme Manager, he has strong capabilities in software prototyping, development, and testing, including privacy checks and fuzz testing, along with extensive experience in open-source documentation and repo organization.