[GENERAL] Name and surname of main applicant
Dominik Tilman
[GENERAL] Are you delivering this project as an individual or as an entity (whether formally incorporated or not)
Entity (Incorporated)
[GENERAL] Please specify how many months you expect your project to last (from 2-12 months)
8
[GENERAL] Please indicate if your proposal has been auto-translated into English from another language
No
[GENERAL] Summarize your solution to the problem (200-character limit including spaces)
Develop an alternative proposal evaluation system for Catalyst that includes a new peer review process and AI support to improve the efficiency, accuracy and fairness of proposal evaluations.
[GENERAL] Does your project have any dependencies on other organizations, technical or otherwise?
No
[GENERAL] If YES, please describe what the dependency is and why you believe it is essential for your project’s delivery. If NO, please write “No dependencies.”
No dependencies.
[GENERAL] Will your project’s output/s be fully open source?
Yes
[GENERAL] Please provide here more information on the open source status of your project outputs
All project outputs, including developed software and documentation, will be fully open source under the Apache 2.0 or MIT License.
[SOLUTION] Please describe your proposed solution
While Project Catalyst has grown immensely, the existing reviewing infrastructure has not kept pace with its expansion, leading to bottlenecks in:
- Maintaining trust in review outcomes.
- Ensuring consistent quality and relevance of proposal reviews.
→ Findings of a Fund 8-10 analysis revealed that there is currently no correlation between the feasibility ratings of proposals and project outcomes: https://twitter.com/DominikTilman/status/1780904135379849643
Our proposed solution to the current limitations within the Project Catalyst review system involves a comprehensive redesign of the reviewing framework.
1. Objectives:
- The main objective is to create a review system that recognises and measures the value and potential of proposals based on objective criteria and reliable subjective expert opinions, so that voters in Project Catalyst have credible decision-making support for the selection of proposals.
- Secondary objectives:
- To improve the scalability of the evaluation system so that it can handle a growing number of proposals.
- To build a flexible and modular framework that allows customisation of inputs and outputs.
2. Key components include:
- New Peer-Review Process: We will develop a protocol that combines various elements into a new and improved peer-review process. This includes the following components:
- Domain-specific experts: The protocol will incorporate domain-specific expertise, ensuring that each proposal is evaluated by individuals who not only understand the broader context, but are also experts in the relevant field. This approach recognises the importance of expertise in the evaluation of innovative proposals.
- Two-track review mechanism: The protocol will utilize both community input and expert assessments. This dual approach aims to balance broad, democratic community participation with the nuanced, in-depth insights of experts. In this way, we can utilize the unique strengths of both groups to achieve a more comprehensive assessment of the proposals.
- Panel Reviews: We will make it possible to integrate panel reviews, an approach already used successfully in some institutional grant programmes and currently being tested in other grant programs with promising results.
- AI Support for Reviewers and Voters: In order to keep track of the growing number of active and past projects and the increasing number of project submissions, we will experiment with various AI technologies to support the review process. This includes the following:
- Incorporating Graph Databases: We will utilize graph databases to analyze connections and track records within the Catalyst ecosystem. This will allow reviewers to visualize project interdependencies and historical performance, offering a clear, data-driven foundation for evaluating the strength and impact of new proposals (a minimal graph sketch follows this component list).
- Experimentation with LLMs: We will experiment with large language models (LLMs) to assess whether proposals meet basic requirements and to train these models specifically for the Catalyst context. This initiative will explore the potential of AI to assist in preliminary proposal screenings, aiming to increase efficiency and consistency in the review process (see the screening sketch after this component list).
- Development of dynamic algorithms: In order to create a fair and reliable evaluation system from the various components mentioned, we will develop and experiment with different methods and approaches in the following areas (a rating sketch follows this component list):
- Proposal Ratings: The inputs from the AI-supported peer-review process above should generate reliable ratings for proposals.
- Reputation scores for reviewers: The protocol will make it possible to track the accuracy and quality of reviewers' evaluations over time, allowing the system to give more weight to those who have consistently provided thoughtful, high-quality evaluations.
- Reward system: An incentive system will be proposed to reward those who provide consistently valuable insights and evaluations. This system will be designed to motivate sustained and high-quality contributions.
- Framework for Impact Assessments: An integrated framework for impact assessments will systematically track and measure the real-world efficacy of funded projects by combining objective and subjective evaluations. Objective data provides the baseline metrics, while subjective insights contextualize and explain those metrics, giving a more holistic view of a project's effectiveness and addressing the limitations inherent in each method on its own:
- Objective Evaluation: Involves quantifiable metrics such as data on user engagement, financial reports, or statistical analyses. This type of evaluation provides a solid foundation of hard facts that can help in measuring direct and tangible impacts of a project.
- Benefits: Provides clear, measurable, and verifiable data that can help in making rational and unbiased decisions.
- Application: Useful in early stages of assessment to establish baseline impacts.
- Subjective Evaluation: Encompasses qualitative assessments such as expert opinions, interviews, and surveys. This approach can capture the nuanced impacts that are not easily quantifiable but are equally important for a comprehensive evaluation.
- Benefits: Captures the depth of the project’s impact, including community perception, satisfaction, and other intangible benefits.
- Application: Critical in understanding long-term effects and in contexts where impacts are more about changes in conditions or perceptions.
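To make the graph-database component above more concrete, the following minimal sketch uses networkx as an in-memory stand-in for a graph database. The node names, attributes, and the track-record metric are illustrative assumptions, not the final Catalyst data model.

```python
# Minimal sketch of the track-record idea, using networkx as an in-memory
# stand-in for a graph database; entities and attributes are illustrative.
import networkx as nx

G = nx.DiGraph()

# Nodes: a proposer and two of their proposals (hypothetical examples)
G.add_node("proposer:alice", kind="proposer")
G.add_node("proposal:f10-123", kind="proposal", fund=10, status="completed")
G.add_node("proposal:f11-456", kind="proposal", fund=11, status="in_progress")

# Edges capture who submitted what and how proposals build on each other
G.add_edge("proposer:alice", "proposal:f10-123", rel="submitted")
G.add_edge("proposer:alice", "proposal:f11-456", rel="submitted")
G.add_edge("proposal:f11-456", "proposal:f10-123", rel="builds_on")

def track_record(graph: nx.DiGraph, proposer: str) -> dict:
    """Summarize a proposer's history: proposals submitted and completed."""
    submitted = [
        target for _, target, data in graph.out_edges(proposer, data=True)
        if data.get("rel") == "submitted"
    ]
    completed = [p for p in submitted if graph.nodes[p].get("status") == "completed"]
    return {
        "submitted": len(submitted),
        "completed": len(completed),
        "completion_rate": len(completed) / len(submitted) if submitted else 0.0,
    }

print(track_record(G, "proposer:alice"))
# -> {'submitted': 2, 'completed': 1, 'completion_rate': 0.5}
```

In production, the same queries would run against a dedicated graph database rather than an in-memory structure.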
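For the LLM experimentation described above, a preliminary-screening step could look like the sketch below. It assumes an OpenAI-compatible chat-completion client; the model name, prompt, and checklist are placeholders rather than the final Catalyst screening criteria.

```python
# Sketch of an LLM-assisted preliminary screening. Assumes an OpenAI-compatible
# client; model, prompt, and checklist are placeholders, not final criteria.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CHECKLIST = [
    "Does the proposal state a clear problem and solution?",
    "Is there a budget breakdown?",
    "Are milestones and acceptance criteria defined?",
]

def screen_proposal(proposal_text: str) -> dict:
    """Ask the model to answer each checklist item with yes/no and a short
    justification, returned as JSON for downstream processing."""
    prompt = (
        "Answer each question about the proposal below with 'yes' or 'no' and "
        "one sentence of justification. Respond as a JSON object keyed by "
        f"question.\n\nQuestions: {CHECKLIST}\n\nProposal:\n{proposal_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

The screening output would flag missing basics for reviewers; it would not replace human judgement.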
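The dynamic-algorithm component (two-track ratings and reviewer reputation scores) could start from a reputation-weighted aggregation like the sketch below. The track weights and the reputation update rule are assumptions we would test and refine during the project, not a final design.

```python
# Minimal sketch of a two-track, reputation-weighted proposal rating.
# Track weights and the reputation update rule are assumptions to be tuned.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    track: str         # "community" or "expert"
    score: float       # rating given to the proposal, e.g. on a 1-5 scale
    reputation: float  # reviewer's current reputation weight in [0, 1]

TRACK_WEIGHTS = {"community": 0.4, "expert": 0.6}  # illustrative split

def proposal_rating(reviews: list[Review]) -> float:
    """Weight each score by track weight and reviewer reputation,
    then normalize so the result stays on the original rating scale."""
    weighted_sum, weight_total = 0.0, 0.0
    for r in reviews:
        w = TRACK_WEIGHTS[r.track] * r.reputation
        weighted_sum += w * r.score
        weight_total += w
    return weighted_sum / weight_total if weight_total else 0.0

def update_reputation(current: float, error: float, rate: float = 0.1) -> float:
    """Nudge reputation toward 1 when a reviewer's assessments track project
    outcomes (small error in [0, 1]) and toward 0 when they diverge."""
    target = max(0.0, 1.0 - error)
    return (1 - rate) * current + rate * target

reviews = [
    Review("r1", "community", 4.0, 0.7),
    Review("r2", "expert", 3.0, 0.9),
    Review("r3", "expert", 4.5, 0.5),
]
print(round(proposal_rating(reviews), 2))  # ≈ 3.65
```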
3. System Architecture Overview:
1. Presentation Layer:
- User Interface (UI): Web and mobile interfaces for community members, experts, and administrators.
- Dashboards: Visual tools to display proposal data, review progress, and impact assessments.
2. Application Layer:
- Review Management System: Manages proposal submissions, reviewer assignments, and review tracking.
- AI Processing Module: Provides AI-driven preliminary screenings and large language model (LLM) assessments.
- Graph Database Interface: Visualizes project relationships and historical performance.
- Reputation and Reward System: Tracks reviewer performance and manages incentives.
- Impact Assessment Engine: Integrates objective and subjective evaluation data for comprehensive impact analysis.
3. Data Layer:
- Graph Database: Stores relationships between projects, proposers, reviewers, and outcomes.
- Relational Database: Stores structured data like user profiles, proposal details, and review records.
- Data Lakes: Stores unstructured data including historical proposal texts and reviews.
4. Integration Layer:
- APIs: Facilitate data exchange between system modules and external systems.
- Message Queue: Communication between components.
4. System Flow Overview:
- Proposal Submission: Proposers submit proposals via a portal. We access these submissions via an API (e.g. Catalyst Voices).
- Reviewer Assignment: Proposals are matched with reviewers based on expertise. Reviewers access assigned proposals through a dashboard (a matching sketch follows this overview).
- Review Process: Community members and experts submit evaluations. AI tools assist with preliminary screenings and summarizations.
- Data Analysis: Graph database visualizes project relationships; impact assessment engine evaluates project impacts.
- Reputation and Rewards: Reviewer performance is tracked, scores are updated, and high-quality reviewers are rewarded.
- Reporting and Visualization: Dashboards display proposal summaries, review progress, and impact metrics for decision-making.
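As a concrete illustration of the reviewer-assignment step above, the sketch below ranks reviewers by the overlap between their expertise tags and a proposal's tags. The tag vocabulary and the Jaccard scoring rule are illustrative assumptions only.

```python
# Illustrative expertise-based reviewer matching: rank reviewers by Jaccard
# overlap between their tags and the proposal's tags. Tags are hypothetical.

def match_reviewers(proposal_tags: set[str],
                    reviewers: dict[str, set[str]],
                    top_n: int = 3) -> list[tuple[str, float]]:
    """Return the top-n reviewers ranked by tag overlap with the proposal."""
    scored = []
    for reviewer_id, tags in reviewers.items():
        union = proposal_tags | tags
        overlap = len(proposal_tags & tags) / len(union) if union else 0.0
        scored.append((reviewer_id, overlap))
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_n]

reviewers = {
    "rev-a": {"defi", "smart-contracts"},
    "rev-b": {"identity", "governance"},
    "rev-c": {"governance", "defi"},
}
print(match_reviewers({"governance", "defi"}, reviewers, top_n=2))
# -> [('rev-c', 1.0), ('rev-a', 0.333...)]
```

In practice, tags could be derived from proposal categories and reviewer self-declarations, validated against past review history.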
[IMPACT] Please define the positive impact your project will have on the wider Cardano community
The proposed alternative proposal review system will have a transformative impact on the Cardano community by enhancing the efficacy, accuracy, and inclusivity of the Project Catalyst funding process. By integrating sophisticated analytical tools and a refined review mechanism, our system will directly contribute to the better alignment of proposal evaluations with actual project outcomes, thus optimizing the allocation of resources within the community.
Impact Metrics:
- Significant Correlation Between Proposal Ratings and Project Outcomes: We aim to demonstrate a strong correlation between proposal ratings under the new system and the success rates of funded projects, thereby increasing the predictability and reliability of funding decisions (a correlation sketch follows these metrics).
- Quality of Reviews: Target a 50% improvement in review feedback quality, ensuring that feedback is more constructive, insightful, and actionable, which will aid proposers in refining their projects.
- Reviewer Engagement: Increase active reviewer participation by 50%, thereby enriching the review process with a wider array of perspectives and expertise, which enhances the overall decision-making process.
- Project Success Rate: Enhance the success rate of funded projects by ensuring that only the most viable and impactful projects are selected, thereby maximizing the effectiveness of allocated funds.
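For the first metric, the underlying check is simple: once outcome data is available, we measure whether ratings assigned at funding time predict later project outcomes. A minimal sketch using Spearman rank correlation is shown below with made-up numbers; real inputs would come from the review system and the impact assessment engine.

```python
# Sketch of the correlation check behind the first impact metric: do proposal
# ratings under the new system predict project outcomes? Numbers are made up.
from scipy.stats import spearmanr

ratings  = [4.2, 3.1, 4.8, 2.5, 3.9]   # ratings at funding time
outcomes = [0.6, 0.4, 0.9, 0.2, 0.8]   # later outcome/impact scores in [0, 1]

rho, p_value = spearmanr(ratings, outcomes)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```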
[CAPABILITY & FEASIBILITY] What is your capability to deliver your project with high levels of trust and accountability? How do you intend to validate if your approach is feasible?
Know-how and Partnerships:
TrustLevel has a proven track record of successful participation and contribution in the voting and reviewing processes across multiple platforms, including Project Catalyst, Arbitrum, and SingularityNet, over the last few years. Our experience has equipped us with a deep understanding of the nuanced challenges and specific requirements of effective voting and reviewing systems.
We have active collaborations with the following teams in Cardano:
- Lidonation: Reputation-Scores for Catalyst Reviewers (https://projectcatalyst.io/funds/11/cardano-use-cases-concept/reputation-scores-for-catalyst-proposers-and-reviewers-by-lidonation-and-trustlevel)
- Photrek: Development of a community tool for voting calculations and community engagement in SingularityNet (https://proposals.deepfunding.ai/graduated/accepted/ed600af3-885c-45bc-a874-56d2dde371ce)
- SidanLab and MeshJS: Smart Contract Development (https://projectcatalyst.io/funds/11/cardano-open-developers/aiken-open-source-smart-contract-library-by-meshjs-and-trustlevel)
Validation of Feasibility:
- Pilot Testing: We will conduct a series of pilot tests to refine the system, using real-world data and scenarios to ensure that the system performs as intended.
- Feedback Loops: Implement feedback mechanisms to gather insights from the community and the Catalyst team, which will be used continuously to improve the system.
Contributions and Publications:
We have published several articles and posts detailing our findings and insights into the proposal review process, which underscore our expertise and innovative approach to enhancing governance and grant systems.
[PROJECT MILESTONES] What are the key milestones you need to achieve in order to complete your project successfully?
We propose an 8-month timeline for the implementation of all milestones, assuming a start date of August 1, 2024.
Milestone 1: Research Analysis of Different System Components (Month 1-2)
Description:
This milestone will focus on analysing research findings and best practices for all components of the new proposal review system. This research analysis will provide a solid foundation for the development of a robust and scalable system tailored to the needs of Project Catalyst as well as other grant programmes.
Output:
- A detailed report summarizing the findings, challenges, and opportunities associated with each system component.
Deliverables:
- Research Findings on System Components: Comprehensive documentation of research outcomes for peer-review processes, reputation scoring mechanisms, incentive strategies, and impact assessment methods.
- Summary of Proposal Reviewing Requirements: Document outlining the specific needs and criteria for the proposal reviewing system within Catalyst.
- Community Engagement Report: Summary of feedback and suggestions collected from the Catalyst community regarding the proposal review system.
Acceptance Criteria:
- A comprehensive research report detailing findings on each system component.
Evidence of milestone completion:
- Published documents, specifications, and code in a GitHub repository.
Milestone 2: Architecture and Design of the System (Month 3-4)
Description:
This milestone is dedicated to developing the design for the new proposal reviewing system, including:
- Protocol for the peer-review process that includes domain-specific expertise, a two-track review mechanism, and panel reviews to enhance the evaluation quality of proposals.
- Dynamic algorithms to calculate proposal ratings, track reviewer reputation scores, and develop an incentive system for rewarding high-quality contributions.
- Framework design for impact assessments that combines objective and subjective evaluations, providing a holistic view of the real-world efficacy of funded projects.
Output:
- A finalized architecture diagram that provides a visual and functional blueprint of the entire system.
- Detailed design documents for each component of the system, including data flow diagrams and interface designs.
Deliverables:
- System Architecture Documentation: Complete documentation of the system’s architecture, outlining structural details, component interactions, and integration points with existing Catalyst platforms.
- Component Design Documents: Detailed design documents for each system component, including technical specifications, operational parameters, and user interaction flows.
Acceptance Criteria:
- Complete system architecture documentation.
- Detailed design documents for each component of the system.
Evidence of milestone completion:
- Published documents, specifications, and code in a GitHub repository.
Milestone 3: Prototyping of the New System (Month 5-6)
Description:
This milestone focuses on translating the detailed designs from the previous phase into a functional prototype for the new review system that will serve as a working model to validate the designs and assess the feasibility and effectiveness of each system component within a controlled environment. This includes:
- Technical prototype including backend and simple UI
- Experiment with and integrate AI technologies such as graph databases and large language models (LLMs) to support the review process, enhancing the ability to manage and evaluate increasing submissions effectively.
Output:
- Fully functional prototypes for each major component of the system.
- Comprehensive testing reports that detail each prototype and document any issues or challenges encountered during the prototyping phase.
Deliverables:
- Prototype Development Documentation: Detailed records of the development process for each prototype, including code repositories.
- Functional Prototypes: Deployable versions of each system component, ready for internal testing.
- Testing Reports: Documents summarizing the testing methodologies, results, and any modifications made to the prototypes based on testing outcomes.
Acceptance Criteria:
- Functional prototype
- Testing reports summarizing issues encountered during prototyping.
Evidence of milestone completion:
- Published documents, specifications, and code in a GitHub repository.
Milestone 4: System Refinement and Community Testing (Month 7)
Description:
This milestone focuses on refining the system based on feedback from the initial prototyping phase. We will update the prototypes to address any identified issues and then conduct extensive community testing. The goal is to ensure that the system not only meets technical specifications but also aligns with user needs and expectations, thereby ensuring high usability and satisfaction.
Output:
- Enhanced and updated versions of the system prototypes that incorporate improvements based on initial testing feedback.
- A comprehensive community testing report that details user feedback, identifies usability issues, and provides performance data.
Deliverables:
- Updated Prototypes: Revised prototypes that address feedback from the initial tests.
- Community Testing Report: A detailed report that summarizes the feedback from community testing, analyzes usability data, and suggests further improvements.
Acceptance Criteria:
- Updated system prototypes incorporating improvements.
- Community testing report detailing user feedback, usability issues, and performance data.
Evidence of milestone completion:
- Published documents, specifications, and code in a GitHub repository.
Milestone 5: Catalyst Integration and Project Closeout (Month 8)
Description:
The final milestone is about readiness for integration with Catalyst. This phase ensures that the system can be integrated into the existing infrastructure. Ideally, Catalyst Voices will already be available; if not, finalisation includes providing appropriate APIs and/or a user interface to make the system usable for Catalyst. The project concludes with a comprehensive close-out process that includes detailed evaluations, reports, and a final presentation to stakeholders.
Output:
- A fully operational system that is ready to integrate with Project Catalyst (or Catalyst Voices).
- A detailed final project report that includes implementation details, integration outcomes, challenges encountered, lessons learned, and future recommendations.
- A project closeout presentation that summarizes the project journey, achievements, and next steps.
Deliverables:
- Integration Plan and Documentation: Detailed plans and documentation outlining the steps and processes used for integrating the system.
- Final Project Report: A comprehensive report that provides a full overview of the project from initiation to completion.
- Project Closeout Presentation: A final presentation prepared for stakeholders to review the project outcomes and discuss future directions.
Acceptance Criteria:
- Final Project Report: The final report must be comprehensive and provide a clear and accurate account of the project, receiving approval from the project oversight committee.
- Closeout Presentation: The closeout presentation must effectively communicate the project outcomes and future recommendations and receive positive feedback from stakeholders.
- Documented Future Recommendations: Recommendations for future enhancements or related projects must be well-documented and agreed upon by the project team and stakeholders.
Evidence of milestone completion:
- Published documents, specifications, and code in a GitHub repository.
- Submitted Report and Video.
[RESOURCES] Who is in the project team and what are their roles?
TrustLevel was founded by Dominik Tilman with the vision of developing methods and protocols to make the reliability of data and information measurable. Since then, we have received various grants (Cardano, SingularityNet, Arbitrum) and delivered projects that have continuously deepened our knowledge, produced tooling, and enabled us to provide better reviewing and voting processes and systems for decentralised communities. All our outputs are open source.
TrustLevel Core Team:
- Dominik Tilman: Project Lead
- Dominik has been actively engaged in Project Catalyst since Fund 3 and has been involved in multiple funded projects, all of which have been successfully delivered or are on track. He is the founder of TrustLevel.io & Conu21.com and has 15+ years of experience in innovation management and company building.
- Links: https://www.linkedin.com/in/dominikstumpp/; www.trustlevel.io; www.conu21.com
- Roman Preuss: Full-Stack Developer
- https://de.linkedin.com/in/roman-preuss-0b145558
- Josch Rossa: Full-stack developer and LLMs
- https://github.com/josros
- Alex Ramalho: Full Stack AI Developer
- https://alexramalho.dev
- Sergey K.: ML + Blockchain Developer
- https://www.linkedin.com/i
Support of Photrek Team:
- Juana Attieh is product lead at Photrek and lead of Photrek's SNET circle. She is co-founder of Cardano MENA and the LALKUL Stake Pool, interim chair of the Membership and Community Committee at Intersect, and advisor at AMLOK.tech. Juana is committed to fostering decentralized governance and contributing towards optimal solutions for self-organizing systems. With her work, Juana seeks to reimagine societies, unlock untapped potential, and provide inclusive opportunities to those who need them most.
- Dr. Kenric Nelson is Founder and President of Photrek, LLC which is developing novel approaches to Complex Decision Systems, including dynamics of cryptocurrency protocols, sensor systems for machine intelligence, robust machine learning methods, and novel estimation methods. He served on the Cardano Catalyst Circle governance council and is leading a revitalization of Sociocracy for All’s work circle. Prior to launching Photrek, Nelson was a Research Professor with Boston University Electrical & Computer Engineering (2014-2019) and Sr. Principal Systems Engineer with Raytheon Company (2007-2019). He has pioneered novel approaches to measuring and fusing information. His nonlinear statistical coupling methods have been used to improve the accuracy and robustness of radar signal processing, sensor fusion, and machine learning algorithms. His education in electrical engineering includes a B.S. degree Summa Cum Laude from Tulane University, a M.S. degree from Rensselaer Polytechnic Institute, and a Ph.D. degree from Boston University. His management education includes an Executive Certificate from MIT Sloan and participation in NSF’s I-Corp.
Support of Lidonation Team:
- Darlington Wleh
- Links: https://www.linkedin.com/in/profd2004/
- Darlington is an engineer by day and, at all times, a dancer, humanitarian, idealist, and entrepreneur. He is a Cardano Ambassador who contributes by writing articles, podcasting, and hosting Twitter Spaces and live meet-ups. Darlington has deep knowledge of blockchain technology and broad experience in the Cardano ecosystem, and has been helping to architect and deliver software solutions for small and medium-sized enterprises for the last 15+ years.
Bounty Program for Community Contributions:
We would like to encourage the Catalyst community to participate in the creation of this alternative reviewing system; therefore, a substantial part of each milestone budget is reserved for community bounties, which will be managed via Dework.
[BUDGET & COSTS] Please provide a cost breakdown of the proposed work and resources
Total Budget in ADA: 200,000 ADA
Detailed Breakdown of Milestones Costs:
Milestone 1: Research and Best Practice Analysis of Different System Components
- 40% TrustLevel
- 40% Photrek
- 20% Bounties
Total Cost: 40,000 ADA
Milestone 2: Architecture and Design of the System
- 40% TrustLevel (Peer-Review and Impact Assessment Framework)
- 40% Photrek (Calculations of Proposal Ratings, Reputation Scores, Rewards)
- 20% Bounties
Total Cost: 50,000 ADA
Milestone 3: Prototyping of the New System
- 50% TrustLevel (Backend, UI, Knowledge Graph & LLM)
- 50% Lidonation (Catalyst Data Integration)
Total Cost: 60,000 ADA
Milestone 4: System Refinement and Community Testing
- 50% TrustLevel
- 50% Bounties
Total Cost: 20,000 ADA
Milestone 5: System Readiness* and Project Closeout
Total Cost: 30,000 ADA
*As the current Catalyst system will be replaced by Catalyst Voices in the near future, we will reserve 10% of the proposal budget so that this newly created review system can be implemented in Project Catalyst.
[VALUE FOR MONEY] How does the cost of the project represent value for money for the Cardano ecosystem?
Strategic Investment: The investment in this project represents substantial value for money by providing crucial improvements to the Catalyst review system, which is foundational to the funding and development of new Cardano ecosystem projects. Improved review processes will enhance project selection quality, directly influencing the efficiency and effectiveness of resource allocation within the Cardano community.
Expertise and Complexity: Costs reflect fair compensation for specialized skills in data analysis and software development. The budget allocations align with prevailing rates in the industry, determined by the experience and skill set of professionals.
Risk mitigation: As a team, we willingly accept the currency risk of being paid in ADA, demonstrating our commitment and adaptability in a dynamic cryptocurrency environment. A decrease in the ADA price is a risk we bear, while any increase allows us to expand the scope.