Rapid advances in large language models (LLMs) have changed the world in many ways, and proposal-based systems like Project Catalyst are now exposed to both the upsides and the downsides of LLMs.
Our project researches how Catalyst participants could employ an LLM assistant both to improve proposals and to support auditing, with the aim of substantially improving Catalyst resource management and effectiveness.
A total of ₳710,000 is allocated to Wolfram: The AI Revolution and Implications to Project Catalyst. Five of the six milestones are completed.
Milestone 1 of 6: Literature Review & Community Surveys. Cost: ₳75,000. Delivery: Month 2 (Dec 2023)
Milestone 2 of 6: Content Collection and Curation. Cost: ₳180,000. Delivery: Month 4 (Feb 2024)
Milestone 3 of 6: LLM Evaluation. Cost: ₳200,000. Delivery: Month 7 (May 2024)
Milestone 4 of 6: Experimentation & Alterations. Cost: ₳100,000. Delivery: Month 8 (Jun 2024)
Milestone 5 of 6: Report Preparation & Writing. Cost: ₳48,500. Delivery: Month 9 (Jul 2024)
Milestone 6 of 6: Final Close Out Report & Presentation. Cost: ₳106,500. Delivery: Month 10 (Aug 2024)
Steph Macurdy
Gabriela Guerra
No dependencies.
Project will be fully open source.
Introduction
Advances in large language models (LLMs) have brought significant changes across the world. Any governance process based on proposals is now subject both to the open-ended creation LLMs make possible and to the useful reviews and evaluations they can provide.
Wolfram: AI Revolution & Project Catalyst Video Presentation
The Problem and Our Solution
At present, understanding the full Catalyst process is a significant undertaking. From problem sensing to challenge interpretation, from proposal writing to feedback sharing and ultimately submission, proposers must follow extensive guidelines and community standards to complete each step correctly. Writing proposals takes real effort, which is a strong deterrent against less serious submissions. However, a global community of participants can struggle to comprehend, articulate, and convey how their proposal will deliver impact in a feasible and cost-effective way. While these strict guidelines are necessary, they may also make it harder to articulate solutions to real problems and ultimately decrease participation. This community would be encouraged if supportive, generative "railings" existed to help facilitate the process. A grassroots system of innovation may benefit from a supportive boost in the area of proposal writing.
Conversely, community reviewers face a similar (yet opposite) undertaking when evaluating proposals in each new funding round. With financial incentives in play and tools like LLMs widely available, what support are we providing these reviewers to uphold and enforce auditability standards? With over 1,000 submissions in each of the last three rounds, what standardized procedures can reviewers rely upon when facing the full spectrum of community-submitted proposals? Even the most veteran reviewers may be biased by a proposer's native language or by recognizing "friendly" proposals. There is a wide variety of reasons why community reviewers need support in the functions they provide the Catalyst community, and they may well welcome an assistant that boosts their ability to efficiently and effectively draw conclusions about the impact, feasibility, and cost-effectiveness of each proposal in a standardized way.
So, our solution is to research what is needed to create an LLM-based assistant that acts as a guiding companion in the Catalyst process. The objective of this research is twofold: to explore the feasibility of providing users with support throughout the stages of proposal submission, evaluation, and voting, and to understand the preferred implementation approach according to the community's perspective. It's important to note that this proposal does not encompass the actual implementation of the LLM assistant.
Backed by Wolfram Research's extensive work on building LLM tools over the past six months, including the successful Wolfram plug-in for ChatGPT, we are well-equipped to research the requirements of a tool that addresses the proposal process's pain points. The potential of LLMs is vast, opening up numerous possibilities for assistance at all stages of the proposal process: researching, writing, reviewing, and finally voting. Our team acknowledges both the significant benefits and the drawbacks of LLMs inside a community process based primarily on written communication. However, LLMs are now a part of society, and we can likely find a balance in which they are used effectively. In fact, this is how open systems naturally evolve and grow.
Researching LLM with Guided Participation
Our research will center on what is needed to provide step-by-step guidance throughout the Catalyst process via a chatbot-style interface. Critically, this is not the creation of an AI chatbot or AI assistant for Catalyst; that would be an enormous undertaking at this point. Rather, this project is significant research and exploration to determine whether the useful creation of an assistant is possible, by investigating novel approaches and proofs of concept, and, if so, how to keep operational costs low enough to provide a significant return on investment for the Catalyst community. Because this project is fully open source, we believe leading this research effort would establish a strong precedent for community actors to get up to speed and build from there.
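To make the idea concrete, here is a minimal, hypothetical sketch of the kind of retrieval-grounded guidance flow the research would investigate. It is not a deliverable of this proposal, and the names used (GUIDELINE_SNIPPETS, retrieve_guidance, build_prompt, query_llm) are illustrative placeholders rather than a committed design.

    # Hypothetical sketch only: one step of retrieval-grounded guidance for the Catalyst process.
    # All names below (GUIDELINE_SNIPPETS, retrieve_guidance, build_prompt, query_llm) are placeholders.

    GUIDELINE_SNIPPETS = {
        "proposal_writing": "State the problem, the solution, milestones, budget, and expected impact.",
        "community_review": "Assess impact, feasibility, and value for money against the challenge brief.",
    }

    def retrieve_guidance(stage: str) -> str:
        # A real system would query a curated, versioned corpus of Catalyst guidelines.
        return GUIDELINE_SNIPPETS.get(stage, "")

    def build_prompt(stage: str, user_question: str) -> str:
        # Ground the assistant's answer in official guidance rather than the model's memory.
        context = retrieve_guidance(stage)
        return (
            "You are a Catalyst assistant. Answer using only the guidance below.\n"
            f"Guidance: {context}\n"
            f"Question: {user_question}"
        )

    def query_llm(prompt: str) -> str:
        # Placeholder for a call to whichever LLM the research ultimately recommends.
        raise NotImplementedError

    # Example, once an LLM backend is chosen:
    # answer = query_llm(build_prompt("proposal_writing", "How detailed should my milestones be?"))

Keeping the retrieval step explicit is one way to control inference costs and to keep answers anchored to validated Catalyst content, two of the constraints this research is meant to quantify.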
Researching the Catalyst Assistant
To build a robust and cost-effective LLM-based Catalyst Assistant, extensive research is required.
Literature Review & Community Surveys
The initial phase of the work will involve conducting thorough literature reviews and community surveys. Wolfram has already dedicated significant time to studying and engaging in discussions about the utilization of LLMs for developing expansive virtual assistants. However, it is essential to delve into detailed discussions and provide comprehensive summaries of various approaches. In addition to researching existing literature, conducting surveys among Cardano Catalyst participants is necessary to gain a deeper insight into the execution of different roles within Catalyst and understand the expectations users would have for a potential assistant.
Content Collection & Curation
There have been 10 Cardano Catalyst funds, and Wolfram Blockchain Labs (WBL) has already constructed curated collections of past proposals, including analyses that rank proposals based on community voting and reviews in our existing dashboard solution. However, for this specific use case, we will need to curate these proposals further. Additionally, we must gather all instructional content related to participating in the different roles within Catalyst. Furthermore, we will likely need to develop a significant amount of new material on how to engage in Catalyst: initial assessments reveal that many existing instructional resources assume a significant amount of prior knowledge, and we cannot assume that such instructional content is already available inside different LLMs, nor that it would be validated or trustworthy. These tasks would provide curated content and a deep understanding of the unique aspects of Catalyst proposals, which would be the key inputs into LLM-based assistance with the detailed aspects of proposal creation, refinement, and evaluation.
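As a purely illustrative aid, the sketch below shows one possible shape for a curated record. The CuratedProposal and to_retrieval_example names and the individual fields are assumptions for illustration, not the schema this milestone would commit to.

    from dataclasses import dataclass, field

    # Hypothetical schema for one curated Catalyst record; field names are illustrative
    # and would be refined during the content collection and curation milestone.
    @dataclass
    class CuratedProposal:
        fund: str                 # e.g. "Fund 9"
        challenge: str            # challenge or category the proposal was submitted to
        title: str
        requested_ada: int
        funded: bool              # whether the community voted to fund it
        review_scores: dict = field(default_factory=dict)  # e.g. {"impact": 4.2}
        text: str = ""            # cleaned proposal body used as LLM context

    def to_retrieval_example(p: CuratedProposal) -> dict:
        # Flatten a record into a text/metadata pair suitable for retrieval or evaluation experiments.
        return {
            "text": f"[{p.fund} / {p.challenge}] {p.title}\n{p.text}",
            "metadata": {
                "funded": p.funded,
                "requested_ada": p.requested_ada,
                **p.review_scores,
            },
        }

Pairing each proposal with its voting and review outcomes is what would let an assistant's suggestions be checked against how the community actually judged similar proposals.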
LLM Evaluation
Drawing from our extensive experience collaborating with various LLM providers, such as OpenAI, we are aware of the wide range of performance capabilities and inference costs associated with these models. To develop a highly efficient system, it is crucial to strike a delicate balance between LLM performance and inference expense. This entails establishing a test suite and evaluating different LLMs specifically for the use case of an LLM-enhanced Catalyst Assistant. Furthermore, during the LLM exploration process we will delve into other aspects of the training pipeline, create and experiment with use cases, and design, refine, and rank proofs of concept by the benefits they may provide to the proposal pipeline.
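For illustration only, the following sketch shows how such a test suite might be used to compare candidate models on answer quality versus inference cost. The model names, per-token costs, scoring metric, and the generate callable are all assumed placeholders, not results or recommendations.

    # Hypothetical sketch of comparing candidate LLMs on quality versus inference cost.
    # Model names, per-token costs, the scoring metric, and `generate` are placeholders.

    TEST_SUITE = [
        {"prompt": "List the required sections of a Catalyst proposal.",
         "reference": "problem solution milestones budget impact"},
        {"prompt": "Summarize the reviewer guidance on feasibility.",
         "reference": "team capability plan realism resource adequacy"},
    ]

    CANDIDATES = {
        "large-hosted-model": {"cost_per_1k_tokens": 0.03},
        "small-open-model": {"cost_per_1k_tokens": 0.002},
    }

    def score_answer(answer: str, reference: str) -> float:
        # Crude word-overlap metric; real evaluation would combine automatic metrics with human review.
        ref_words = reference.split()
        return len(set(answer.split()) & set(ref_words)) / max(len(ref_words), 1)

    def evaluate(generate):
        # `generate(model_name, prompt)` stands in for the actual inference call of each provider.
        results = {}
        for name, info in CANDIDATES.items():
            quality = sum(
                score_answer(generate(name, case["prompt"]), case["reference"])
                for case in TEST_SUITE
            ) / len(TEST_SUITE)
            results[name] = {"avg_quality": quality, "cost_per_1k_tokens": info["cost_per_1k_tokens"]}
        return results

Ranking candidates by this kind of quality/cost pair is what would allow the research to recommend an operating point with an acceptable return on investment for the community.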
Final Output
The culmination of our research and development efforts will be a detailed research report. This report will document the exploration of LLMs that meet the necessary licensing requirements and return on investment, presenting the findings to the Cardano community. Insights gathered from surveys conducted within the Cardano community will also be included. Computational essays will provide a step-by-step account of the potential development process, highlighting the critical aspects of building a potential assistant. Specifications and architecture will be detailed to provide transparency and clarity on potential functionality and implementation.
Project Objectives Summary:
Overall Benefits:
Conclusion
The introduction of an LLM-based assistant for the Cardano Catalyst proposal process holds immense potential for transformation. By harnessing the capabilities of large language models, we can help participants navigate Cardano Catalyst seamlessly, offering real-time assistance and guidance. Together, we can revolutionize the proposal process, foster innovation, and propel the Cardano Catalyst community to unprecedented achievements.
The "Catalyst System improvements" challenge asks "what research and development is required to advance the state of the art of Catalyst and allow Catalyst to serve the community's needs better?'
Our proposed solution addresses this question by researching LLMs and, through a community-supported research process, trying to understand how they can be applied to Project Catalyst.
We believe this proposal directly addresses the Catalyst Systems Improvements challenge because this research has a high likelihood of advancing the state of the art of Cardano's innovation platform. LLMs are in their infancy, and we are just beginning to see their wide-ranging effects. A decentralized governance and proposal-based innovation fund would greatly benefit from understanding the upsides and downsides of LLM assistance. This is a direction our community will inevitably face more of in the future, so engaging with it now may prevent catastrophic failure later.
The challenge goes on to detail the category of Academic research. "Clearly defines a known Catalyst-specific problem-space where the intention is to identify facts and/or clearly stated opinions that will likely assist in solving Catalyst-specific problems, or a detailed study of a Catalyst-specific subject, especially in order to discover (new) information or reach a (new) understanding."
LLMs are state-of-the-art technology, and Wolfram Research has been exploring many ways to use them.
One of the critical issues is how many text-based processes will change with the introduction of such revolutionary technologies. We believe it is important to explore how to integrate LLM services in ways that help all members of the Catalyst community participate in all roles of the Catalyst process. Participants should not be hampered, nor should the intrinsic value of their proposals be lessened, by, for instance, writing in a second language or having limited experience in proposal writing. In a way, establishing the use case for individuals ensures that power is shared more democratically in a process based on decentralization as a core tenet.
Success looks like general community agreement that this line of research is interesting and worth exploring further. Clear success would be if this research enables other participants to succeed with LLMs and/or their implementation into Catalyst.
Our deliverables include a public repository of curated Catalyst training data, a research report, breakout rooms to share findings and receive community feedback, and Twitter Spaces to engage in discussion.
Our capability to deliver the project with high levels of trust and accountability is grounded in several key factors: our track record, our experienced team, our firsthand expertise, and our adherence to strong financial management practices.
Milestone 1: Literature Review & Community Surveys
Milestone 2: Content Collection and Curation
Milestone 3: LLM Evaluation
Milestone 4: Experimentation & Alterations
Milestone 5: Report Writing and Presentation
Milestone 1: Literature Review & Community Surveys (2 months)
Milestone 2: Content Collection and Curation (2 months)
Milestone 3: LLM Evaluation (3 months)
Milestone 4: Experimentation & Alterations (1 month)
Milestone 5: Report Writing and Presentation (1 month)
Milestone 1: Literature Review & Community Surveys - ₳94,600
Milestone 2: Content Collection and Curation - ₳189,600
Milestone 3: LLM Evaluation - ₳283,900
Milestone 4: Experimentation & Alterations - ₳94,600
Milestone 5: Report Writing and Presentation - ₳47,300
The research of an LLM resource for the Cardano Catalyst process could be a game-changer. By leveraging the power of large language models, we can empower participants to navigate Cardano Catalyst with ease, providing real-time answers, guidance and reviewer support. Our comprehensive approach to research, content creation, and planning ensures the research will be produced effectively. Together, we can revolutionize the Catalyst system, foster innovation, and propel the Cardano community to new heights.
NB: Monthly reporting was deprecated from January 2024 and replaced fully by the Milestones Program framework.
Jon Woodard, CEO
Jon Woodard is the CEO of Wolfram Blockchain Labs, where he coordinates the decentralized projects that connect the Wolfram technology ecosystem to different DLT ecosystems. Previously at Wolfram Research, Jon worked on projects at the direction of Wolfram Research CEO Stephen Wolfram, and prior to that he was a member of the team that worked on monetization strategies and execution for Wolfram|Alpha. Jon has a background in economics and computational neuroscience. He enjoys cycling in his spare time.
Jason Cawley, Wolfram Solutions
Jason Cawley has been with Wolfram Research for 22 years. His academic background is in the social sciences and their analytic methods, including finance and economics, statistics, and modeling and simulation. He worked on core Mathematica development in those areas, then worked on Wolfram|Alpha from its inception to public release, including most of its finance features. Jason was project architect and technical lead on various consulting projects over the following decade. He has been the Director of Wolfram Solutions since 2020. Jason lives in Phoenix, AZ.
Anshu Manik, Wolfram Technical Consulting
Anshu Manik is a seasoned professional specializing in AI/ML software development, with a Ph.D. in Civil Engineering from UIUC. With over 14 years of experience at Wolfram Research, Anshu has honed their expertise in developing cutting-edge solutions using Data Science, AI/ML, process automation, and natural language processing. Anshu's innovative approach and deep knowledge of advanced technologies enable them to create impactful solutions for clients worldwide.
Onkar Singh, Project Director
Onkar Singh is a project director in Wolfram Solutions and has been with Wolfram Research for 18 years. He joined Wolfram Solutions in 2011 and has worked both as a developer and as a technical lead on numerous projects. He has led numerous consulting projects, including custom Wolfram|Alpha deployments, and has vast experience working closely with enterprise clients. Prior to joining Solutions, he worked in technical support for several years and led the Technology Services Group, where he helped customers with various Wolfram technologies. Onkar enjoys playing tennis in his spare time.
Johan Veerman, CTO
Johan Veerman is General Manager at Wolfram Research South America and CTO at Wolfram Blockchain Labs. Previously he was Science Advisor at the Ministry of Foreign Affairs in Peru and Chief Scientist on two Antarctic expeditions. Johan's background is in physics and business management. He enjoys playing soccer and is a certified cave diver.
Steph Macurdy, Head of Research and Education
Steph Macurdy has a background in economics, with a focus on complex systems. He attended the Real World Risk Institute in 2019, led by Nassim Taleb, and has been investing in the crypto asset space since 2015. He previously worked for Tesla as an energy advisor and for Cambridge Associates as an investment analyst. Steph is a youth soccer coach in the Philadelphia area and is interested in permaculture.
Gabriela Guerra Galan, Project Manager
Gabriela has 15+ years of experience leading projects. She is a certified PMP and Product Owner with a bachelor's degree in Mechatronics Engineering, complemented by a master's degree in Automotive Engineering. As the co-founder of Bloinx, a startup that secured funding from the UNICEF Innovation Fund, she has demonstrated a passion for driving innovation and social impact.
Jesús Hernández, Principal Consultant
Jesús Hernández is a Principal Consultant with Wolfram Solutions. He has been with the company for ten years and is continually learning how Wolfram technology can be applied in new areas. He has a background in theoretical atomic physics. Jesús enjoys cooking, listening to almost all forms of music, and watching his favorite sports teams inevitably lose.
For additional services on this project, we will engage and collaborate with experienced members of the Wolfram Research team who have extensive expertise in LLM projects.