Voters are overwhelmed. All proposals receive roughly equal scrutiny regardless of quality, and reviews are rarely meaningful because community reviewers are not adequately incentivised to conduct deep, critical investigations.
R&D and experimental implementation of several alternative assessment/QA models for Catalyst funds distribution, one of them being the 2-stage proposal assessment process.
This is the total amount allocated to Alternative Assessment Process R&D and Experimental Implementation.
No other applicant
Catalyst Research team/IOG, for access to data on voter behaviour and for surveying.
Timing of the next Catalyst fund, Fund 11.
The project will be fully open source
This proposal seeks to research and develop several competing assessment models in parallel and rigorously test the quality of output that each of them produces.
Many have been proposed in community discussions, but their relative merits remain unclear until they have been tested empirically in the field.
Some of the alternative mechanisms that would be researched and developed are discussed in the following Tally blog post, written by the lead proposer Simon Sällström and Jan Ole Ernst (PhD, Quantum Physics, Oxford). These include holographic voting and conviction voting.
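For intuition, conviction voting replaces a one-off vote snapshot with support that accumulates (and decays) over time while tokens remain staked on a proposal. A minimal sketch of the standard accumulation rule follows; the decay parameter and stake amounts are illustrative assumptions, not Catalyst values:

```python
# Minimal sketch of conviction voting's accumulation rule; the decay
# parameter `a` and the stake are illustrative assumptions.
def update_conviction(y_prev: float, staked: float, a: float = 0.9) -> float:
    """One time-step of y_t = a * y_{t-1} + staked.

    While tokens stay staked, conviction grows towards staked / (1 - a);
    once they are withdrawn (staked = 0), it decays geometrically.
    """
    return a * y_prev + staked

y = 0.0
for _ in range(20):               # 20 time-steps of continuous support
    y = update_conviction(y, staked=100.0)
print(round(y, 1))                # 878.4, approaching the cap of 1000.0
```

The point of such a rule is that sustained support counts for more than a last-minute surge, which is one of the properties we would test experimentally.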
Problem
Solution
We propose to divide proposal assessment into two stages. In the first stage, proposal assessors check proposals against a list of well-defined requirements and indicate the domain expertise needed to assess the proposal in more depth. At this stage, assessors must also give one point of constructive criticism that provides specific examples and actionable suggestions for positive change. To proceed to the second stage, a proposal needs to satisfy 80% of the requirements.
The second stage is the qualitative assessment stage. It takes inspiration from the referee system used for scientific articles. The qualitative assessments have several objectives: first, to provide quality assurance on the first-stage assessments; second, to provide a concise and easily understandable summary of the proposal; third, to thoroughly investigate and critically engage with all aspects of the proposal; fourth, to provide constructive feedback to the proposer; and fifth, to make a recommendation to "fund", "revise and resubmit", or "not fund". A revision of the PA reward model does not necessarily have to be implemented as part of the 2-stage assessment model.
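To make the stage-1 gate concrete, here is a minimal sketch of the pass/fail logic described above. The checklist items and field names are hypothetical placeholders; only the 80% threshold and the mandatory point of constructive criticism come from the proposal itself:

```python
# Illustrative sketch of the stage-1 gate; REQUIREMENTS is a
# hypothetical checklist, not the finalised Catalyst list.
from dataclasses import dataclass

REQUIREMENTS = [
    "measurable_kpis",
    "budget_breakdown",
    "analysis_of_existing_solutions",
    "team_capability_evidence",
    "milestone_plan",
]

@dataclass
class Stage1Review:
    checks: dict             # requirement -> bool, one entry per item
    expertise_tags: list     # domains needed for the stage-2 review
    constructive_point: str  # the mandatory actionable criticism

    def passes(self, threshold: float = 0.8) -> bool:
        met = sum(self.checks.get(r, False) for r in REQUIREMENTS)
        return met / len(REQUIREMENTS) >= threshold

review = Stage1Review(
    checks={r: True for r in REQUIREMENTS[:4]},  # 4 of 5 requirements met
    expertise_tags=["mechanism design"],
    constructive_point="KPIs lack baselines; add current figures.",
)
print(review.passes())  # True: 4/5 = 0.8 meets the 80% bar
```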
Summary
It will improve the efficiency of fund allocation and help voters make better-informed decisions about which proposals should receive grants.
Problems with the current process
A thorough review of all issues with the current Catalyst process is beyond the scope of this proposal, so we focus on those that the present proposal addresses. The problems are as follows.
First, the current system is not well suited to specialisation and division of labour. Some proposals are subpar in the sense that they lack basic elements such as measurable KPIs or an analysis of existing solutions to the problem being addressed. Checking for the existence of these necessary components is easy, but critically engaging with them and providing constructive feedback is difficult. Such critical analysis often requires domain expertise, and it typically also requires native (or near-native) English and excellent writing skills. Few people possess all of the above, and those who do typically have very good outside options in the form of well-paid jobs. Their time is valuable, and it is a waste of resources to have them read through incomplete proposals. Given their skills and outside options, a high remuneration rate will be needed to incentivise them properly.
Second, there is a lack of feedback. In the current system, getting feedback from proposal assessors is not easy. Proposers generally comment on other proposals hoping to receive comments on their own in return, but reaching the group of proposal assessors is much harder. Proposal assessors are generally busy; some proposers post in the Proposal Advisor Telegram chat and, if they are lucky, receive some feedback, but this is the exception rather than the rule. The system has no mechanism to incentivise feedback that proposers can incorporate into their proposals within the same funding round.
Third, there is a lack of incentives to conduct a thorough analysis. Each proposal in a funding round is allocated a budget sufficient to reward three 'good' assessments and two 'excellent' ones, with an excellent assessment earning 3x the reward of a good one. In past rounds, only around 3% of assessments were rated 'excellent'. The problem is that many Proposal Assessors (PAs) who make this cost-benefit calculation conclude that it is more profitable to aim for 'good' assessments. If writing an excellent assessment took three times as long as writing a good one and were guaranteed the excellent rating, a PA should be indifferent between the two tasks. However, because of (a) very high standards for the 'excellent' rating and (b) unclear criteria for what constitutes an excellent assessment, the chance of an attempt actually being rated excellent is small, so many PAs no longer try. Instead, they focus on writing good assessments, where the time invested leads to a more predictable reward. This reduces the overall value that proposal assessors provide to the voting community.
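The incentive gap can be made explicit with a quick expected-value calculation. As an assumption for illustration, we treat the ~3% historical share of excellent ratings as a rough proxy for the probability that an attempted excellent assessment is actually rated excellent (the true probability for deliberate attempts is unknown):

```python
# Expected hourly reward for a PA choosing between writing 'good'
# assessments and attempting 'excellent' ones. r, t and p are
# illustrative parameters, not official Catalyst values.
def hourly_rewards(r=1.0, t=1.0, p=0.03):
    good = r / t                                   # a 'good' assessment pays r and takes t
    # An excellent attempt takes 3t; it pays 3r with probability p,
    # and is assumed to fall back to the 'good' reward r otherwise.
    excellent_attempt = (p * 3 * r + (1 - p) * r) / (3 * t)
    return good, excellent_attempt

good, exc = hourly_rewards()
print(f"good: {good:.2f}, excellent attempt: {exc:.2f} per unit time")
# -> good: 1.00, excellent attempt: 0.35 — attempting excellence pays
#    roughly a third as much per hour unless p approaches 1.
```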
Fourth, there is a lack of capacity. The sheer volume of proposals means that proposal assessors cannot invest enough time in investigating and engaging with proposals to raise their quality. Similarly, voters are overwhelmed by the number of proposals and can only express preferences on a small subset of those submitted. Adding more quality-filtering mechanisms addresses both problems, improving both the quality of the guidance provided and the voting decisions made.
Fifth, each proposal is currently allocated the same budget in terms of Proposal Assessor rewards. This is a waste of resources, since many proposals are incomplete by objective standards. At the moment, the budget available for each of the ~1,000 submitted proposals is around $440. Our hypothesis is that a proposal that does not contain sufficient information to be properly assessed can be discarded after a 15-minute skim. Even if every proposal is skimmed in this manner by seven independent Proposal Assessors remunerated at $30/hour (a western-European white-collar hourly rate), this discovery process would cost only about $52.50 per proposal. Assuming that 100-200 proposals (10-20%) would not pass this basic objective checklist, the assumptions stated above imply savings of roughly $38,750-$77,500; funds which could instead be used to incentivise thorough analysis, interviews, and investigation of the remaining proposals. This is indeed what we propose here.
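For transparency, here is the arithmetic behind these figures; every input is one of the stated assumptions above, not a measured value:

```python
# Back-of-envelope model of the first-stage skim; all inputs are the
# proposal's stated assumptions.
HOURLY_RATE = 30            # USD/hour, western-European white-collar rate
SKIM_MINUTES = 15           # time for one assessor to skim one proposal
N_SKIMMERS = 7              # independent assessors per proposal
PER_PROPOSAL_BUDGET = 440   # USD in PA rewards per proposal today

skim_cost = N_SKIMMERS * (SKIM_MINUTES / 60) * HOURLY_RATE  # $52.50
net_saving = PER_PROPOSAL_BUDGET - skim_cost                # per discarded proposal

for n in (100, 200):        # assumed 10-20% of ~1,000 proposals fail the checklist
    print(f"{n} discarded -> ${n * net_saving:,.0f} freed for deeper review")
# -> 100 discarded -> $38,750 ; 200 discarded -> $77,500
```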
The main idea is to produce a ranking of proposals based on the alternative assessment process, and then conduct qualitative surveys with voters and proposers.
Proposers
Voters
We may include more questions; finalising the survey instruments is part of the preparation and research agenda.
Public channels for ongoing updates
Closeout
Why best suited?
Processes to manage funds
The main goals of the project can be summarized as follows:
This overarching goal is broken down into the following sub-components:
To validate the feasibility of the proposed approach, the project can employ the following validation methods:
By combining these validation methods, the project can gather evidence and insights to determine the feasibility and viability of the proposed approach to improving the proposal and assessment process in Project Catalyst.
Milestones for the project:
Milestone 1: Project Set-up and Team Recruitment (Months 1-2)
Milestone 2: Research and Development (Months 3-4)
Milestone 3: Experimental Preparation (Months 3-4)
Milestone 4: Alternative 1 Technical Development (Months 3-4)
Milestone 5: Alternatives 2 and 3 Development (Months 4-5)
Milestone 6: Sample Selection (Month 5)
Milestone 7: Hypothesized Outcome Validation: Alternative 1 (Months 5-7)
Milestone 8: Hypothesized Outcome Validation: Alternative 2 (Months 5-7)
Milestone 9: Hypothesized Outcome Validation: Alternative 3 (Months 5-7)
Milestone 10: Hypothesized Outcome Validation: Alternative 4 (Months 5-7)
Milestone 11: Evaluation Criteria Establishment (Month 7)
Milestone 12: User Feedback Analysis (Month 8)
Milestone 13: Final Analysis and Report Write-up (Months 9-10)
Milestone 1: Job Description and Project Set-up
Milestone 2: Experimental Implementation
Milestone 3: Sample Selection
Milestone 4: Hypothesized Outcome Validation
Milestone 5: Evaluation Criteria Establishment
Milestone 6: Comparison of User Feedback and Objective Cutoff
Milestone 7: Final Analysis and Reporting
Personnel Costs
Subtotal: $140,000
Equipment and Supplies
Subtotal: $2,000
Participant Recruitment and Compensation
Subtotal: $42,000
Travel and Accommodation
Subtotal: $3,300
Marketing and Outreach: $2,000
Legal and Admin: $3,000
Contingency: $5,000
TOTAL USD = $197,500
TOTAL ADA = 790,000 ADA
*assuming 0.25 USD = 1 ADA
High-level justification
So far, about $31m has been distributed. It is not known how much of this can be considered "wasted" or lost to scam proposals, nor to what extent this could have been avoided with a better assessment process.
The Cardano treasury, at its peak, held around $1 billion. If Cardano is to succeed, investing in well-researched, evidence-based assessment methodologies for how it uses these funds will yield returns that far exceed the cost of an individual project.
Rather than trying to quantify this, let us illustrate with just a small number of proposals. If this research endeavour could eliminate funding for even five proposals like those below (together worth about $200k, roughly this project's budget), the investment pays for itself.
The following proposals were funded, yet at the time of writing I was unable to find functioning outputs from them (daocoders.net is not working):
DAO-NET: DAO Deployment Platform | Lido Nation English
DAO-NET: Legal Defense DAO | Lido Nation English
DAO-NET: Multilingual Translation | Lido Nation English
DAO-NET: Auditor DAO | Lido Nation English
DAO-NET: Small Developer Funding | Lido Nation English
The amount received by this proposer comes to about $200k in total. This is merely one example, probably among the largest; there are others. Investing in a good process is worth the effort.
Justification for recruiting a full-time researcher and a technical implementation specialist
To rigorously and experimentally evaluate different governance mechanisms, we need highly qualified individuals. It may be difficult to recruit such individuals on mere 6-month contracts, whereas 12-month contracts make recruitment far more feasible.
Salaries
The indicated salaries are below market rates for highly skilled professionals in the UK, but above academic earnings (post-docs) and entry-level salaries (e.g., at a Big 4 consultancy). For a person with 3-5 years of professional experience the salary is unlikely to be competitive, but our hope is that the autonomy, remote work, and interesting job description will attract highly qualified applicants to the position.
Legal structure
The people on this team would formally be contractors, with contracts adapted to the milestone structure and conditions of Catalyst grant funding. Legal advice will be sought as part of the first deliverable.
Simon Sällström. Project Manager. Key responsibilities will be to recruit a researcher to lead this work and to facilitate communication between relevant stakeholders.
Research lead [to be recruited]. A person with a PhD in a relevant empirical field, or 3+ years of relevant empirical research experience.
Technical specialist [to be recruited]. A specialist with the relevant background to deploy the solution on a continuous testnet and assist with data collection and analysis.