<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Lilypad Network - internet-scale off-chain distributed compute solution]]></title><description><![CDATA[Verifiable, truly internet-scale distributed compute network
Efficient off-chain computation for AI &amp; ML
DataDAO computing
The next frontier of web3]]></description><link>https://blog.lilypad.tech</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1694659080138/HI1l1RPQT.png</url><title>Lilypad Network - internet-scale off-chain distributed compute solution</title><link>https://blog.lilypad.tech</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 07 Apr 2026 10:19:38 GMT</lastBuildDate><atom:link href="https://blog.lilypad.tech/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Lilypad Announces Closure After Pioneering Work in Decentralised AI Compute]]></title><description><![CDATA[Today, after 2 years of pushing the boundaries of decentralised AI, we announce the closure of Lilypad.
We founded Lilypad to reimagine how artificial intelligence is built and accessed, and enabled developers to run AI models across a distributed ne...]]></description><link>https://blog.lilypad.tech/lilypad-announces-closure</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-announces-closure</guid><category><![CDATA[lilypadnetwork]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 30 Sep 2025 11:45:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759231355281/f9199c32-286c-40f1-9e61-3d8e0b14ef24.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Today, after 2 years of pushing the boundaries of decentralised AI, we announce the closure of Lilypad.</p>
<p>We founded Lilypad to reimagine how artificial intelligence is built and accessed, and enabled developers to run AI models across a distributed network of global compute providers. Our mission was to make high-performance inference and training accessible, affordable, and auditable - all on decentralised infrastructure - and we did.</p>
<h2 id="heading-what-were-proud-of">What we’re proud of</h2>
<p>Over the past two years, Lilypad has built a powerful open-source framework, onboarded global contributors, secured partnerships with enterprise and Web3 leaders, and demonstrated real-world applications across deSci, Agents, distributed inference, AI APIs, and user-friendly UI. The platform served thousands of jobs and showcased a new model for trust-minimised AI compute.</p>
<p>We wrote cutting-edge research papers (including this <a target="_blank" href="https://arxiv.org/abs/2501.05374">arXiv-published paper on verification in decentralised networks</a>), collaborated with renowned researchers (<a target="_blank" href="https://youtu.be/5Hq3lUobrN4?si=kUJjHNyjrUp9Xrs_">such as Cedars-Sinai oncology researcher Vivek Pujara</a>), enabled new scientific discoveries (including a new heart medication, developed in collaboration with Dr. Michael Levin and Amelie Schroder, that’s now in wet lab trials), launched cutting-edge AI Agents, and teamed up with some incredible projects across the space, including Morpheus, Hive, Baselight.ai, Recall, and Chiper.ai.</p>
<p>We’re also proud of our committed dev community and their support over the past 2 years.</p>
<ul>
<li><p>Building a world-class team</p>
</li>
<li><p>Being one of the first deAI projects in the space (Augment hackathon)</p>
</li>
<li><p>Pushing the boundaries and having an ambitious vision</p>
</li>
<li><p>Partnerships across the space with top-tier projects</p>
</li>
<li><p>Top-tier research &amp; research partnerships</p>
</li>
<li><p>Our committed dev-first community</p>
</li>
</ul>
<h2 id="heading-why-not-just">Why not just…</h2>
<p>We know the market is on the rise and conditions are incredibly favourable to deAI projects currently. We want to assure interested parties that we have considered all the possibilities for continuation. This was not an easy decision to come to, and disappointing our committed community, supporters, partners, and stakeholders was not an announcement we ever wanted to make.</p>
<p>We put our blood, sweat and tears into building something we believed in; we were not a memecoin, a rugpull or a fake deal. We built real technology and provable infrastructure that pushed the deAI space forward.</p>
<p>However, 90% of startups fail, and unfortunately, alongside personal founder commitments, we weren't able to secure the support needed to continue to beat the odds.</p>
<p>We still believe wholeheartedly in the decentralised AI future, and the vision behind Lilypad remains relevant and necessary - perhaps even more so than when we first set out to build it.</p>
<p>That’s why we are open-sourcing all of the code, with guides to running it, on <a target="_blank" href="https://github.com/Lilypad-Tech">GitHub</a>.</p>
<h2 id="heading-open-sourcing-lilypad">Open Sourcing Lilypad</h2>
<p>While Lilypad will cease operations, the team is committed to ensuring that open-source components remain available to the community where possible. All support and further updates will be posted to <a target="_blank" href="https://github.com/Lilypad-Tech">GitHub</a>.</p>
<p>We will continue to host the infrastructure while we can, alongside our personal GPUs to ensure continuity for our users and partners.</p>
<p>We’d love to see others continue the mission of decentralised AI, or even see a LilypadDAO form.</p>
<p>If you have ideas for the future of Lilypad, feel free to get in touch.</p>
<h2 id="heading-closing">Closing</h2>
<p>We set out to prove that open, verifiable compute could change how AI is built - and we showed that it can. From demos to research breakthroughs, enterprise collaborations to community-driven innovation, Lilypad wouldn’t have been possible without you.</p>
<p>Though Lilypad as a company is ending, the mission of decentralised AI is far bigger than us. We remain convinced that decentralised infrastructure, coupled with the primitives of crypto, offers a lasting competitive advantage - for businesses, developers, and end-users alike (<a target="_blank" href="https://blog.lilypad.tech/why-decentralized-ai-deai-will-win">We blogged about this here</a>).</p>
<p>We are proud of what we built. We are grateful to everyone who joined us in believing in this vision. And we can’t wait to see where the community takes it next.</p>
<p><strong>Thank you.</strong></p>
<p>The Lilypad Team</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759231428802/ae2eb0c9-bae4-42bd-96fa-152555850605.png" alt class="image--center mx-auto" /></p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Facial DNA Network (FDN)]]></title><description><![CDATA[Lilypad is thrilled to welcome our first advanced custom creative model provider to the network in a new collaboration with Facial DNA Network (FDN). This partnership brings together decentralised AI infrastructure and purpose-built facial inference ...]]></description><link>https://blog.lilypad.tech/lilypad-x-fdn</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-fdn</guid><category><![CDATA[AI]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[provenance]]></category><category><![CDATA[Lilypad Anura API]]></category><category><![CDATA[lilypadnetwork]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 24 Jun 2025 07:00:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1750430887125/c75f4e50-aa3e-4484-8886-ccddcdafa4d0.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is thrilled to welcome our first advanced custom creative model provider to the network in a new collaboration with <strong>Facial DNA Network (FDN)</strong>. This partnership brings together decentralised AI infrastructure and purpose-built facial inference models and data incentives, opening a powerful new chapter in user-owned AI.</p>
<h2 id="heading-meet-facial-dna-network-fdn"><strong>Meet</strong> Facial DNA Network (<strong>FDN)</strong></h2>
<p>FDN is building token-incentivised facial recognition tools that let users contribute images, refine biometric models, and earn for their participation. Their approach combines privacy-aware ML pipelines with reward systems that align incentives across communities, developers, and model creators.</p>
<h2 id="heading-aligned-vision"><strong>Aligned Vision</strong></h2>
<p>Both Lilypad and FDN believe in building systems where AI is accessible, transparent, and driven by the people who use it. This partnership is grounded in mutual respect for open ecosystems, user ownership, and scalable infrastructure, creating a virtuous cycle and fair economics for contributors.</p>
<h2 id="heading-synergistic-strengths"><strong>Synergistic Strengths</strong></h2>
<p><strong>Lilypad provides:</strong></p>
<ul>
<li><p>Monetisable Open-Access Model Marketplace</p>
</li>
<li><p>Flywheel for sustainable data incentives -&gt; model training -&gt; model end use</p>
</li>
<li><p>Decentralised GPU network capable of running containerised models</p>
</li>
<li><p>Open API layer with verifiable, pay-per-job execution</p>
</li>
<li><p>Built-in provenance and proof-of-compute</p>
</li>
</ul>
<p><strong>FDN contributes:</strong></p>
<ul>
<li><p>High-signal facial matching models trained on consented, user-contributed data</p>
</li>
<li><p>Data incentives for data collection &amp; clear data provenance audits</p>
</li>
<li><p>A user-friendly interface with filters and remixable outputs</p>
</li>
<li><p>A tokenised economy ($FDN) designed to support growth, access, and contribution</p>
</li>
</ul>
<p><strong>Together this enables:</strong></p>
<ul>
<li><p>A system where users can run facial inference jobs, choose to share their data, and earn $FDN -&gt; creating a virtuous loop and audit trail for model provenance &amp; economic sustainability</p>
</li>
<li><p>A platform for launching creative, co-branded campaigns that amplify both utility and reach</p>
</li>
<li><p>Live containerised deployment of FDN’s model on Lilypad</p>
</li>
</ul>
<h2 id="heading-why-this-partnership-matters"><strong>Why This Partnership Matters</strong></h2>
<h3 id="heading-a-real-incentive-flywheel"><strong>A Real Incentive Flywheel</strong></h3>
<p>FDN’s facial inference model is now live on Lilypad. Users can:</p>
<ul>
<li><p>Run inference</p>
</li>
<li><p>Opt in to share their image</p>
</li>
<li><p>Improve the model</p>
</li>
<li><p>Earn $FDN for participating</p>
</li>
</ul>
<p>Each run fuels the system. Data trains better models, better models attract more users, and more users keep the loop spinning.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750429964656/f3abce38-c82c-4cfb-8f31-174d24df4575.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-specialised-models-monetised-natively"><strong>Specialised Models, Monetised Natively</strong></h3>
<p>FDN’s models are designed for identity, avatar generation, and video outputs. They are containerised, callable, and monetised on a per-job basis using Lilypad’s infrastructure. This gives model creators a scalable and direct way to publish and earn.</p>
<h3 id="heading-transparent-provenance"><strong>Transparent Provenance</strong></h3>
<p>All Lilypad jobs come with built-in proof. Developers and users can verify what was run, when, and on which hardware. This allows for auditability, reproducibility, and trust across the pipeline.</p>
<h3 id="heading-clear-benefits-for-every-audience"><strong>Clear Benefits for Every Audience</strong></h3>
<p><strong>Developers</strong> get plug-and-play access to advanced identity models.<br /><strong>Users</strong> can contribute, earn, and play with custom outputs.<br /><strong>Ecosystem builders</strong> get a composable layer for building next-gen agents, identity flows, or user-facing experiences.</p>
<h2 id="heading-technical-approach">Technical Approach</h2>
<p>This collaboration focuses on containerising FDN’s models in a reproducible, job-based format. These models run through Lilypad’s decentralised compute network, with each execution signed by the hardware operator and verified through Lilypad’s proof system. This provides on-chain compatibility, audit trails, and structured attribution.</p>
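<p>To make “reproducible, job-based format” concrete, here is a minimal Python sketch of the general shape of a containerised job specification: a pinned container image, declared inputs, and resource requirements. The field names and values are hypothetical placeholders for illustration, not the actual Lilypad module schema:</p>

```python
import json

def build_job_spec(image: str, prompt: str) -> dict:
    """Assemble a hypothetical containerised job spec.

    Field names are illustrative placeholders, not the real
    Lilypad module format.
    """
    return {
        # Pinning the image by digest keeps the job reproducible:
        # the same spec always resolves to the same container.
        "image": image,
        "inputs": {"prompt": prompt},
        "resources": {"gpu": 1, "ram_gb": 16},
        # Each execution would be signed by the hardware operator,
        # so results can be attributed and audited later.
        "verification": {"require_operator_signature": True},
    }

spec = build_job_spec("fdn/face-model@sha256:abc123", "generate avatar")
print(json.dumps(spec, indent=2))
```

<p>The key design point is that the spec is declarative: because the image digest and inputs are fixed up front, any node that runs the job should produce an attributable, auditable result.</p>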
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1750430808464/9b3bade1-56dd-4bb7-8cc6-cd0b68cc3f7f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-use-case-spotlight">Use Case Spotlight</h2>
<p>This integration unlocks new biometric capabilities for builders and creatives alike. Imagine:</p>
<ul>
<li><p>A decentralised identity onboarding flow that uses FDN’s face match tech for secure verification</p>
</li>
<li><p>Earning tokens for contributing your image to improve a model, tracked transparently through Lilypad’s job logs</p>
</li>
<li><p>Lilypad’s first advanced creative model provider joining the network</p>
</li>
</ul>
<p>These aren’t blue-sky ideas; they’re now within reach.</p>
<h2 id="heading-stay-tuned-for-our-exciting-collab"><strong>Stay Tuned for our exciting collab!</strong></h2>
<p>We’re already collaborating on a follow-up project designed to push this even further - bringing creative outputs, on-chain rewards, and social integrations into one launch. We can’t tell you about it yet… but you’re going to have fun with it!</p>
<p>Stay tuned for more ;)</p>
<h2 id="heading-get-involved"><strong>Get Involved</strong></h2>
<p>Follow, test, and build with us:</p>
<ul>
<li><p><a target="_blank" href="https://x.com/Lilypad_Tech">@Lilypad_Tech</a></p>
</li>
<li><p><a target="_blank" href="https://x.com/Project_FDN">@Project_FDN</a></p>
</li>
<li><p><a target="_blank" href="https://facialdna.ai/">facialdna.ai</a></p>
</li>
<li><p><a target="_blank" href="https://lilypad.tech/">lilypad.tech</a></p>
</li>
</ul>
<p>The future of biometric AI is decentralised, permissionless, and user-powered. Let’s build it together!</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Olas]]></title><description><![CDATA[Lilypad Network, a full stack modular AI services platform, today announced its acceptance into the prestigious Olas Agent Accelerator program. As part of this strategic partnership, Lilypad will integrate its leading research agent https://docs.lily...]]></description><link>https://blog.lilypad.tech/lilypad-x-olas</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-olas</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 03 Jun 2025 14:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749517803282/9ade5970-e07b-484f-8c79-d146c44898c2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad Network, a full stack modular AI services platform, today announced its acceptance into the prestigious Olas Agent Accelerator program. As part of this strategic partnership, Lilypad will integrate its leading research agent, the <a target="_blank" href="https://docs.lilypad.tech/lilypad/use-cases-agents-and-projects/agents/ai-oncologist-agent">AI Oncologist Agent</a>, into Pearl, Olas's groundbreaking AI agent marketplace, marking a significant milestone in democratizing access to advanced AI research capabilities.</p>
<h2 id="heading-revolutionizing-ai-research-through-decentralized-infrastructure">Revolutionizing AI Research Through Decentralized Infrastructure</h2>
<p>Lilypad Network has established itself as a pioneer in serverless, distributed compute networks that enable internet-scale data processing for AI, ML, and other computation-intensive applications. The Olas Accelerator program, which awards $1M in grants and OLAS Dev Rewards to developers building AI agents for Pearl, represents the perfect synergy for Lilypad's mission to democratize AI infrastructure.</p>
<p>"This partnership represents a fundamental shift in how researchers and developers access computational resources from the Lilypad Network. Developers can simply download the Pearl client and use the Research agent with ease, with no need for an API key or other complex configurations," said Alison Haire, CEO and Founder of Lilypad Network. "Our mission is to equip developers with the tools they need to build an open web, and to be a critical link in a collaborative, decentralized AI platform that anyone can contribute to and use."</p>
<h2 id="heading-introducing-the-lilypad-research-agent">Introducing the Lilypad Research Agent</h2>
<p>The Lilypad Research Agent leverages the network's distributed compute infrastructure to provide users with unprecedented access to AI-powered research capabilities. Key features include:</p>
<ul>
<li><p><strong>Cost-Effective Research at Scale</strong>: Eliminates the need for an expensive analyst team to read through large numbers of research papers. Professionals from biomedical fields to marketing can use the agent to scale their work, build a critical knowledge base, and produce actionable reports</p>
</li>
<li><p><strong>Decentralized Compute Access</strong>: Harnesses Lilypad's global network of on-demand GPU nodes for computationally intensive research tasks. Compute is verified on-chain using Arbitrum (currently on testnet)</p>
</li>
<li><p><strong>Open Source AI</strong>: Leverages leading models from the open-source AI community via the Lilypad Network. The research assistant will start with Llama3.3 70b (Quantized)</p>
</li>
<li><p><strong>Verifiable Results</strong>: Utilizes Lilypad's verifiable, trustless computational network to ensure research integrity and reproducibility</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749517902507/9f871bb1-6890-4445-b16b-a78f84c9396b.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-strategic-integration-with-pearl-marketplace">Strategic Integration with Pearl Marketplace</h2>
<p>Pearl, the world's first "agent app store," allows users to fully own and customize their AI agents while running them autonomously. The integration of Lilypad's Research Agent will enable Pearl users to:</p>
<ul>
<li><p><strong>Scale research efforts while running an agent locally</strong>: Pearl is simple to get started with and runs locally on the user’s machine. With the Lilypad research agent, users can run a local agent while accessing powerful AI models and computing resources for scaling inference needs</p>
</li>
<li><p><strong>Conduct Advanced Research</strong>: Access powerful AI models and computing resources for scientific research, data analysis, and academic studies</p>
</li>
<li><p><strong>Maintain Ownership</strong>: Retain full control and ownership of research data and results through Pearl's decentralized architecture</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1749517923526/b69a20ec-31f2-41eb-908e-7a5a5674149e.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-accelerating-open-science-and-ai-development">Accelerating Open Science and AI Development</h2>
<p>Lilypad's vision extends to "swarms of agents coordinating scientific research" and building decentralized AI systems across a permissionless network. This partnership with Olas represents a crucial step toward that future, where:</p>
<ul>
<li><p>Research institutions can access enterprise-grade compute without the enterprise costs</p>
</li>
<li><p>Independent researchers gain access to the same computational resources as major tech companies</p>
</li>
<li><p>Open AI and ML development flourishes through transparent, efficient, and accessible computational ecosystems</p>
</li>
<li><p>Academic collaboration transcends geographical and institutional boundaries</p>
</li>
</ul>
<h2 id="heading-looking-forward-the-future-of-decentralized-ai-research">Looking Forward: The Future of Decentralized AI Research</h2>
<p>This partnership positions both organizations at the forefront of the decentralized AI revolution. As Lilypad prepares for its mainnet launch in 2025 and Olas expands the Pearl ecosystem, users can expect:</p>
<ul>
<li><p>Enhanced research capabilities through improved AI model access</p>
</li>
<li><p>Lower barriers to entry for computational research</p>
</li>
<li><p>Increased collaboration between the decentralized computing and AI agent communities</p>
</li>
<li><p>New revenue streams for both compute providers and research contributors</p>
</li>
</ul>
<h2 id="heading-about-lilypad-network">About Lilypad Network</h2>
<p>Lilypad is powering democratic participation in the AI Innovation Economy by pioneering avenues for AI Scientists to deploy, distribute and monetise models. As a full stack modular AI services platform, Lilypad provides a model marketplace, MLops tooling, and a distributed, on-demand compute network for scaling AI inference for ML pipelines, agent workflows and desci applications.</p>
<p>The Lilypad Network provides a verifiable, serverless decentralized compute network that offers global, permissionless access to compute power. The network orchestrates off-chain compute through a global GPU marketplace and uses on-chain verification to guarantee compute success.</p>
<h2 id="heading-about-olas">About Olas</h2>
<p>Olas enables everyone to own a share of AI, specifically autonomous agent economies. Founded in 2021, Olas offers the composable Olas Stack for developing autonomous AI agents and the Olas Protocol for incentivizing their creation and co-ownership. Pearl, Olas's flagship application, streamlines entry into the world of autonomous AI agents, enabling users to participate without special skills, advanced hardware, or previous experience.</p>
<h2 id="heading-keep-up-to-date">Keep Up to Date!</h2>
<p><a target="_blank" href="https://linktr.ee/LilypadNetwork">https://linktr.ee/LilypadNetwork</a></p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Recall]]></title><description><![CDATA[Lilypad is proud to announce a groundbreaking partnership with Recall - a collaboration that helps to build the next layer of the decentralized intelligence stack.
This partnership is deeply rooted in history. We first worked with the Recall team bac...]]></description><link>https://blog.lilypad.tech/lilypad-x-recall</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-recall</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Web3]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 27 May 2025 13:00:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1748603409251/bc86e1be-e085-41ba-a306-ed8c4572c96f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is proud to announce a groundbreaking partnership with <a target="_blank" href="https://recall.network/"><strong>Recall</strong></a> - a collaboration that helps to build the next layer of the decentralized intelligence stack.</p>
<p>This partnership is deeply rooted in history. We first worked with the Recall team back when they were <a target="_blank" href="https://textile.io/"><strong>Textile.io</strong></a>, showcasing the potential of decentralized data and compute through early demos with their <strong>Basin</strong> network. </p>
<p>Today, as Textile has evolved into Recall, with a bold mission to accelerate agentic intelligence through crowdsourced AI competitions, Lilypad is proud to support builders in the Recall ecosystem as they compete in performance-based challenges! Check out Recall’s latest agent competition, the <a target="_blank" href="https://x.com/recallnet/status/1923393004259115417">“ETH vs SOL” trading competition</a>, to catch the latest action!</p>
<h2 id="heading-shared-vision-democratizing-ai-using-verifiable-systems-and-p2p-tech"><strong>Shared Vision: Democratizing AI using verifiable systems and p2p tech</strong></h2>
<p>Recall is building a foundational layer that lets AI agents verifiably track output quality and performance over time. Immutable agent performance logs have proven to be a valuable source of information when choosing agents for multi-agent workflows. Lilypad powers those workflows with decentralized, permissionless compute infrastructure for verifiable AI workflows and inference: new AI models can be easily onboarded to the network, giving any user of the Lilypad API and CLI low-cost access to them. Compute workloads are verified on the Ethereum blockchain via Arbitrum.</p>
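<p>As a rough illustration of what model access through an API and CLI layer like this can look like, the Python sketch below builds an OpenAI-style chat-completions payload. The endpoint URL, model name, and authentication scheme are assumptions for illustration only, not the exact Lilypad API:</p>

```python
import json

# Hypothetical endpoint -- a placeholder for illustration, not
# guaranteed to match the live Lilypad API.
INFERENCE_URL = "https://anura-testnet.lilypad.tech/api/v1/chat/completions"

def build_inference_request(prompt: str, model: str = "llama3.1:8b") -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The model identifier is a hypothetical example.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,
    }

payload = build_inference_request("Summarise this paper in three bullets.")
# The payload would then be POSTed with an API key, e.g.:
#   requests.post(INFERENCE_URL,
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
print(json.dumps(payload, indent=2))
```

<p>Keeping the request in the widely used chat-completions shape is what makes onboarding cheap for developers: existing OpenAI-compatible tooling can point at a different base URL without code changes.</p>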
<p>Together, we are:</p>
<ul>
<li><p>Building verifiable systems to ensure actions taken by AI agents have a censorship-resistant record of output performance and inference completed</p>
</li>
<li><p>Powering a future where AI agents can transact, collaborate, and evolve openly across modular decentralized services</p>
</li>
<li><p>Providing new tool sets to simply deploy and monitor verifiable AI agents</p>
</li>
</ul>
<h2 id="heading-details-of-partnership-synergistic-strengths"><strong>Details of Partnership - Synergistic Strengths</strong></h2>
<p><strong>Lilypad's capabilities:</strong></p>
<ul>
<li><p>Verifiable, on-demand compute network using decentralised edge GPU nodes optimized for AI workloads</p>
</li>
<li><p>AI model marketplace for easy model access and deployment</p>
</li>
<li><p>Blockchain-backed infrastructure providing verifiability, payment rails, and provenance</p>
</li>
</ul>
<p><strong>Recall's expertise:</strong></p>
<ul>
<li><p>Crowdsourced skill competitions and benchmarks</p>
</li>
<li><p>Performance-based reputation, rankings, and discovery</p>
</li>
<li><p>Transparent, auditable chain-of-thought and reasoning</p>
</li>
<li><p>A credibly neutral foundation for the agentic economy</p>
</li>
</ul>
<p>Together, we are combining technology, infrastructure, community, and tooling to deliver a powerful and modular decentralized AI ecosystem.</p>
<h2 id="heading-practical-synergies-short-term"><strong>Practical Synergies (Short-Term)</strong></h2>
<ul>
<li><p>Builders can now deploy AI models on Lilypad and track agent performance on tasks like crypto trading on Recall</p>
</li>
<li><p>Collaborate on agent competitions including the upcoming ETH vs SOL agent trading competition (<a target="_blank" href="https://x.com/recallnet/status/1924818635425538405">https://x.com/recallnet/status/1924818635425538405</a>) as well as future events such as research or marketing agent competitions</p>
</li>
</ul>
<p>Leveraging advances in p2p networking and data verifiability, this partnership helps developers build agents with provable mechanisms to improve agent performance. Immediate benefits of this partnership include:</p>
<ul>
<li><p>Verifiable, decentralized RAG (Retrieval-Augmented Generation) agent pipelines</p>
</li>
<li><p>Skills benchmarking based on transparent compute and storage proofs</p>
</li>
<li><p>Censorship-resistant knowledge publishing for agentic systems</p>
</li>
</ul>
<h2 id="heading-strategic-value-and-potential-impact"><strong>Strategic Value and Potential Impact</strong></h2>
<p><strong>Why this is strategic:</strong></p>
<ul>
<li><p>Unlocks new agentic AI use cases: decentralized knowledge markets, autonomous skill upgrades, and persistent agent memory</p>
</li>
<li><p>Empowers developers and researchers to access modular, composable infrastructure without centralized bottlenecks</p>
</li>
<li><p>Strengthens Lilypad’s ecosystem positioning as the compute backbone of decentralized intelligence</p>
</li>
</ul>
<p><strong>Benefits for the Web3 and OSS ecosystem:</strong></p>
<ul>
<li><p>Creation of composable AI infrastructure primitives</p>
</li>
<li><p>Greater credibility and adoption of decentralized AI stacks</p>
</li>
<li><p>New standards for verifiable agentic data, compute, and skill certification</p>
</li>
</ul>
<p><strong>Benefits for developers and users:</strong></p>
<ul>
<li><p>Easier integration between AI computation and decentralized storage</p>
</li>
<li><p>Access to ready-made pipelines for deploying verifiable agents</p>
</li>
<li><p>Performance gains, faster development cycles, and cost savings through decentralized architectures</p>
</li>
</ul>
<h2 id="heading-long-term-vision-global-ecosystem-thesis"><strong>Long-Term Vision - Global Ecosystem Thesis</strong></h2>
<p>This partnership reflects a broader shift toward open, decentralized AI infrastructure. Together, Lilypad and Recall are:</p>
<ul>
<li><p>Championing modular ecosystems where AI agents, compute, and storage interact transparently</p>
</li>
<li><p>Building infrastructure that is owned by its users and communities, not monopolized by platforms</p>
</li>
<li><p>Setting a new precedent for ecosystem-aligned alliances that grow stronger through collaboration</p>
</li>
</ul>
<p>The future of AI must be permissionless, composable, and community-driven. Lilypad and Recall are working to ensure it is.</p>
<h2 id="heading-future-outlook-looking-ahead"><strong>Future Outlook - Looking Ahead</strong></h2>
<p>Immediate next steps:</p>
<ul>
<li><p>Continued technical integration between Lilypad’s compute marketplace and Recall’s AI competition platform</p>
</li>
<li><p>Launch of developer resources, co-marketing initiatives, and community onboarding programs</p>
</li>
<li><p>Joint hackathons, grants, and pilot project showcases across DeSci, DePIN, and Agentic AI sectors</p>
</li>
</ul>
<p>Vision for the long term:</p>
<ul>
<li>A thriving decentralized intelligence economy where builders, agents, researchers, and users access a seamless, trustless infrastructure backbone</li>
</ul>
<p>📍 Explore Recall: <a target="_blank" href="https://recall.network/">recall.network</a><br />📍 Build with Lilypad: <a target="_blank" href="https://lilypad.tech">lilypad.tech</a></p>
<p>Join us in building a future where intelligence is open, composable, and driven by the communities that create it.</p>
]]></content:encoded></item><item><title><![CDATA[The Decentralized AI Landscape]]></title><description><![CDATA[As centralized AI becomes increasingly monopolized, the emerging decentralized AI (deAI) landscape offers a radically open alternative. Enabled by blockchain, decentralized compute, and a growing demand for transparency, this space is coalescing into...]]></description><link>https://blog.lilypad.tech/the-decentralized-ai-landscape</link><guid isPermaLink="true">https://blog.lilypad.tech/the-decentralized-ai-landscape</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[lilypadnetwork]]></category><category><![CDATA[lilypad]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 20 May 2025 07:00:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747142284007/e01fe989-df3c-49bf-97b9-b60eaa1d39cf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As centralized AI becomes increasingly monopolized, the emerging decentralized AI (deAI) landscape offers a radically open alternative. Enabled by blockchain, decentralized compute, and a growing demand for transparency, this space is coalescing into a multi-layered ecosystem of infrastructure, intelligence, and incentives. In this post, we break down the decentralized AI ecosystem into clearly defined categories, explore the real-world utility of key players, explain how these parts interoperate, and articulate Lilypad’s role as a coordination layer for this emerging stack.</p>
<h3 id="heading-1-physical-infrastructure-the-gpu-backbone"><strong>1. Physical Infrastructure: The GPU Backbone</strong></h3>
<p>These platforms form the compute base layer by renting out GPU resources from individuals or data centers, often using tokenized incentives.</p>
<ul>
<li><p><strong>Akash</strong>: Offers a permissionless marketplace for general-purpose compute. Ideal for lightweight AI tasks or persistent inference endpoints.</p>
</li>
<li><p><a target="_blank" href="http://io.net"><strong>io.net</strong></a>: Aggregates idle enterprise-grade GPU resources. Supports heavy ML workloads like image generation and video processing.</p>
</li>
<li><p><strong>Aethir</strong>: Designed for gaming and real-time compute but increasingly focused on AI workloads.</p>
</li>
<li><p><strong>Hyperbolic</strong>: AI-specific DePIN network with focus on inference services and fine-tuning.</p>
</li>
<li><p><a target="_blank" href="http://Vast.ai"><strong>Vast.ai</strong></a>: A fiat-based GPU marketplace with thousands of available rigs. Often used for Stable Diffusion and training.</p>
</li>
<li><p><strong>Golem</strong>: One of the earliest P2P compute networks; suitable for basic AI jobs and distributed simulations.</p>
</li>
<li><p><strong>Exabits, Spheron, Impossible Cloud</strong>: Differentiated on availability, performance pricing, and decentralized guarantees.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: These GPU networks often plug into protocols like Lilypad, Gensyn, or Ritual to serve on-demand inference or training jobs. Lilypad coordinates job dispatch and payment, abstracting compute across providers.</p>
<h3 id="heading-2-decentralized-cloud-vms-stateless-infrastructure-for-ai-pipelines"><strong>2. Decentralized Cloud VMs: Stateless Infrastructure for AI Pipelines</strong></h3>
<p>This layer mimics services like AWS Lambda or Docker containers—but decentralized.</p>
<ul>
<li><p><strong>Fluence</strong>: Offers WASM containers for stateless execution. Ideal for agent coordination, ephemeral inference, and middleware.</p>
</li>
<li><p><a target="_blank" href="http://Aleph.im"><strong>Aleph.im</strong></a>: Focused on decentralized indexing and serverless hosting—useful for storing and calling AI model metadata.</p>
</li>
<li><p><strong>Swan Chain, Cartesi</strong>: Run secure off-chain compute; Cartesi does so through Linux-based VMs. Suitable for off-chain model evaluation or RL environments.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: These platforms can host parts of AI workflows—such as metadata indexing or post-inference validation—and invoke Lilypad or OpenGradient for actual model execution.</p>
<h3 id="heading-3-ai-agents-and-frameworks"><strong>3. AI Agents and Frameworks</strong></h3>
<p>The emergent UX of deAI is agentic: models that act autonomously, coordinate resources, and reason.</p>
<ul>
<li><p><strong>Eliza, Morpheus, Virtuals</strong>: AI agents that autonomously run jobs, maintain memory, and interact with other agents or protocols.</p>
</li>
<li><p><strong>Naptha, Olas, Gaia, Theoriq, Recall</strong>: Frameworks for building and running these agents. Olas is the most robust, offering incentives, gossip coordination, and scheduling.</p>
</li>
<li><p><strong>Nevermined</strong>: Payments middleware for agents to settle costs autonomously.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Agents built on these frameworks submit jobs to inference platforms like Lilypad or Gaia, store state on IPFS or Arweave, and pay using smart contracts on chains like Optimism or Ethereum.</p>
<h3 id="heading-4-data-storage-and-databases"><strong>4. Data, Storage, and Databases</strong></h3>
<p>AI’s raw material is data. These platforms ensure it remains verifiable, accessible, and censorship-resistant.</p>
<ul>
<li><p><strong>Data networks &amp; datasets</strong>: Baselight.ai (structured datasets), Vana (user-owned data), Grass (web crawling), Openmesh (data commons).</p>
</li>
<li><p><strong>Storage</strong>: Filecoin (deep storage), Arweave (permaweb), IPFS (general-purpose distributed storage).</p>
</li>
<li><p><strong>Databases</strong>: Fireproof.storage, Space &amp; Time (verifiable compute over indexed data).</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad jobs often consume data from Vana or Openmesh, read/write artifacts to IPFS/Filecoin, and can validate lineage via Space &amp; Time or Story Protocol.</p>
<h3 id="heading-5-middleware-amp-service-layer"><strong>5. Middleware &amp; Service Layer</strong></h3>
<p>This category handles routing, orchestration, and economic logic in a composable deAI stack.</p>
<ul>
<li><p><strong>SingularityNET</strong>: Offers a token-gated AI service registry. Hosts models and agents that can call each other.</p>
</li>
<li><p><strong>OpenGradient</strong>: Coordinates distributed training jobs, with tokenized rewards.</p>
</li>
<li><p><strong>Ritual</strong>: Programmable compute infrastructure for LLMs, often integrated with <a target="_blank" href="http://io.net">io.net</a>.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: These tools often wrap and route calls to infrastructure like Lilypad or Hyperbolic. For instance, Ritual jobs may be executed on Lilypad’s protocol but initiated through Ritual’s SDK.</p>
<h3 id="heading-6-ai-service-chains"><strong>6. AI Service Chains</strong></h3>
<p>Layer 1 or 2 chains optimized for AI use—either through VM design or native tokenomics.</p>
<ul>
<li><p><strong>0g</strong>: A modular AI operating system focused on scalable storage, data availability, and GPU scheduling. Ideal for high-throughput agent workflows.</p>
</li>
<li><p><strong>Near</strong>: Home to deAI projects and models. Supports contract-level model inference.</p>
</li>
<li><p><strong>OG Labs, Sahara AI, IoTeX</strong>: Building app-specific chains that integrate AI directly into their execution layers.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad can dispatch workloads or validate results using smart contracts on these chains. These chains also host front-end dApps that submit jobs.</p>
<h3 id="heading-7-distributed-training-platforms"><strong>7. Distributed Training Platforms</strong></h3>
<p>Instead of centralized clusters, these platforms coordinate training jobs across distributed nodes.</p>
<ul>
<li><p><strong>Gensyn</strong>: The gold standard in this space. Coordinates LLM training with incentives for participants.</p>
</li>
<li><p><strong>Prime Intellect, Nous</strong>: Research-focused alternatives enabling RLHF, fine-tuning and collaborative model development.  </p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad can route fine-tuning tasks to Gensyn or enable collaborative training pipelines across multiple datasets pulled from Vana or Filecoin.</p>
<h3 id="heading-8-inference-platforms"><strong>8. Inference Platforms</strong></h3>
<p>These specialize in high-throughput, pay-per-use model serving.</p>
<ul>
<li><p><strong>Hyperbolic</strong>: GPU inference network focusing on low-latency model runs.</p>
</li>
<li><p><strong>Gaia</strong>: Focused on LLMs and foundational inference for agents.</p>
</li>
<li><p><strong>Bittensor Subnets</strong>: Specialized for specific AI tasks, like vision or translation.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad is interoperable with Gaia, and in the future could connect to Bittensor subnets via bridging wrappers. Lilypad also offers its own marketplace with inference APIs.</p>
<h3 id="heading-9-model-hosting-marketplaces"><strong>9. Model Hosting Marketplaces</strong></h3>
<p>Places where models are deployed, discovered, and monetized.</p>
<ul>
<li><p><strong>Bagel, Prime Intellect,</strong> <a target="_blank" href="http://Flock.io"><strong>Flock.io</strong></a>: Offer permissionless upload, API-based access, and in some cases, provenance.</p>
</li>
<li><p><strong>Bittensor Subnets</strong>: Some act as model hosts, rewarded via stake.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Models on Lilypad can be cross-posted to Bagel or exposed via Flock APIs. Model metadata can be stored on IPFS and referenced on Arweave.</p>
<h3 id="heading-10-privacy-amp-security-layers"><strong>10. Privacy &amp; Security Layers</strong></h3>
<p>Key to enabling AI that respects ownership, user data, and safe execution.</p>
<ul>
<li><p><strong>TEE</strong>: Phala, Nillion (confidential execution of inference jobs)</p>
</li>
<li><p><strong>ZK Proofs</strong>: Nexus (privacy-preserving inference verification)</p>
</li>
<li><p><strong>FHE</strong>: Zama, Gateway (encrypted model execution)</p>
</li>
<li><p><strong>Other Privacy</strong>: Lit Protocol (access control), Story Protocol (IP + provenance)</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad can integrate ZK or TEE as plugins for sensitive jobs—e.g., confidential medical inference.</p>
<h3 id="heading-11-reinforcement-learning-protocols"><strong>11. Reinforcement Learning Protocols</strong></h3>
<p>Still early, but these explore incentive-aligned training via RL.</p>
<ul>
<li><p><strong>Newcoin</strong>: RL-based learning from social graph behavior.</p>
</li>
<li><p><strong>Cambrian Network</strong>: RL for distributed learning in autonomous agents.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad jobs can be used as reward signals or task environments within these RL frameworks.</p>
<h3 id="heading-12-ip-amp-provenance-tooling"><strong>12. IP &amp; Provenance Tooling</strong></h3>
<p>Tracks who built what, how it’s used, and where value flows.</p>
<ul>
<li><p><strong>Story Protocol</strong>: Royalty-enabled provenance across derivative works.</p>
</li>
<li><p><strong>EQTY Lab</strong>: Licensing and creator attribution.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad’s token rails and job graphs can integrate Story Protocol for remix royalties and IP lineage.</p>
<h3 id="heading-13-defai-ai-defi-intersections"><strong>13. DeFAI: AI + DeFi Intersections</strong></h3>
<p>Finance rails designed specifically for AI workflows and autonomous agents.</p>
<ul>
<li><strong>Glif, Parasail</strong>: DeFi rails for model monetization, agent staking, or inference loans.</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad can use Glif as a settlement layer, or Parasail to underwrite compute requests.</p>
<h3 id="heading-14-ai-research-amp-commons"><strong>14. AI Research &amp; Commons</strong></h3>
<p>Think tanks, foundations, and data commons ensuring open AI development.</p>
<ul>
<li><p><strong>Foresight Institute</strong>: Research funding for AGI safety and open science.</p>
</li>
<li><p><strong>RMIT Blockchain Innovation Hub, dbForest, CEL</strong>: Focused on building frameworks and commons for deAI.</p>
</li>
</ul>
<p><strong>Interoperability</strong>: Lilypad can support their agents, training models, or infrastructure. These orgs may also help shape governance.</p>
<h3 id="heading-where-lilypad-fits"><strong>Where Lilypad Fits</strong></h3>
<p>Lilypad is the decentralized execution and economic coordination layer for the AI ecosystem.</p>
<ul>
<li><p><strong>For Model Creators</strong>: A frictionless way to deploy and monetize models</p>
</li>
<li><p><strong>For Compute Providers</strong>: Monetize idle GPUs via permissionless job participation</p>
</li>
<li><p><strong>For Developers</strong>: Plug into an on-chain model marketplace with API-based access</p>
</li>
</ul>
<p><strong>Key Differentiators:</strong></p>
<ul>
<li><p>On-chain job routing, execution, escrow, and rewards</p>
</li>
<li><p>Composability across data, models, compute, and agentic frameworks</p>
</li>
<li><p>Modular and chain-agnostic, with EVM-based smart contracts</p>
</li>
</ul>
<p>Lilypad acts as the “glue” of decentralized AI—connecting demand and supply across categories through a standardized protocol and token incentive layer.</p>
<h3 id="heading-final-insight-deai-as-a-parallel-chain"><strong>Final Insight: deAI as a Parallel Chain</strong></h3>
<p>Decentralized AI is bigger than a category: it’s an ecosystem. Like DeFi redefined finance, deAI redefines intelligence, coordination, and value creation.</p>
<p>Lilypad’s role isn’t to compete with these players, but to make them usable, valuable, and coordinated.</p>
<p><strong>The future is not closed AI APIs. The future is programmable, composable, community-owned intelligence. And Lilypad is building it.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x PoscidonDAO]]></title><description><![CDATA[Lilypad is proud to announce a major partnership with PoscidonDAO, a visionary force in decentralized science (DeSci) building infrastructure and incentives for planetary-scale, open research.
Together, we are unlocking a new paradigm for community-a...]]></description><link>https://blog.lilypad.tech/lilypad-x-poscidondao</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-poscidondao</guid><category><![CDATA[Science ]]></category><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[desci]]></category><category><![CDATA[lilypad]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Wed, 14 May 2025 11:00:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747097110062/358b834f-1c3b-49b9-a200-f5730488eba2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is proud to announce a major partnership with PoscidonDAO, a visionary force in decentralized science (DeSci) building infrastructure and incentives for planetary-scale, open research.</p>
<p>Together, we are unlocking a new paradigm for community-aligned scientific discovery and decentralized, verifiable AI.</p>
<p>This isn’t just another partnership: it’s the convergence of two pioneering ecosystems solving real-world challenges through compute, coordination, and credibility.</p>
<hr />
<h2 id="heading-details-of-partnership-synergistic-strengths"><strong>🌟 Details of Partnership - Synergistic Strengths</strong></h2>
<p>Lilypad provides:</p>
<ul>
<li><p>A decentralized compute layer designed for AI workloads: model hosting, inference, and agent workflows</p>
</li>
<li><p>Modular APIs, onchain job execution, and blockchain-native proof-of-compute</p>
</li>
<li><p>A growing library of machine learning models and agentic tooling for public-good science applications</p>
</li>
</ul>
<p>PoscidonDAO brings:</p>
<ul>
<li><p>Deep expertise in DeSci infrastructure, research coordination, and Web3-native scientific funding</p>
</li>
<li><p>A vibrant builder and contributor community focused on regenerative health, science, and AI</p>
</li>
<li><p>Flagship projects like weCura, designed to make decentralized medical AI accessible, transparent, and impactful</p>
</li>
</ul>
<p>Together, we are setting the foundation for a permissionless, collaborative, and evidence-backed approach to decentralized science at scale.</p>
<hr />
<h2 id="heading-practical-synergies-short-term"><strong>🔧 Practical Synergies (Short-Term)</strong></h2>
<ul>
<li><p>Lilypad will serve as the compute layer for PoscidonDAO’s upcoming AI-powered projects, starting with weCura</p>
</li>
<li><p>Researchers and contributors in PoscidonDAO can now deploy models and workflows on Lilypad's decentralized inference engine</p>
</li>
<li><p>Shared infrastructure tooling, co-branded documentation, and public demos of science running onchain</p>
</li>
</ul>
<p>Use cases available now:</p>
<ul>
<li><p>weCura: a decentralized platform for trusted medical AI, executed with Lilypad-backed verifiability</p>
</li>
<li><p>Integration with Rare Compute to amplify access to GPU supply for scientific modeling</p>
</li>
<li><p>New experiments using Lilypad to run, benchmark, and publish reproducible compute pipelines in the open</p>
</li>
</ul>
<hr />
<h2 id="heading-roadmap-collaboration-mid-term"><strong>🚀 Roadmap Collaboration (Mid-Term)</strong></h2>
<p>Over the coming months, Lilypad and PoscidonDAO will:</p>
<ul>
<li><p>Build an onchain AI research registry using verifiable Lilypad compute</p>
</li>
<li><p>Co-develop a model standard for publishing, funding, and validating open science algorithms</p>
</li>
<li><p>Launch public quests, hackathons, and contributor bounties for AI+science innovation</p>
</li>
<li><p>Expand weCura’s capabilities across image-based diagnostics, language models, and agentic workflows</p>
</li>
</ul>
<p>What this unlocks:</p>
<ul>
<li><p>Credible, traceable, permissionless compute for DeSci</p>
</li>
<li><p>A path for research DAOs and contributors to publish inference-ready models backed by transparent results</p>
</li>
<li><p>Scalable coordination between GPU supply, open-source algorithms, and incentivized research goals</p>
</li>
</ul>
<hr />
<h2 id="heading-strategic-value-benefits-for-the-ecosystem"><strong>📈 Strategic Value - Benefits for the Ecosystem</strong></h2>
<p>Why this matters:</p>
<ul>
<li><p>Scientific research is compute-intensive and credibility-constrained. Lilypad and PoscidonDAO solve both.</p>
</li>
<li><p>By combining AI-native infrastructure with DeSci-native governance and coordination, we make open science scalable and sustainable</p>
</li>
<li><p>It aligns communities, capital, and computation in a single, public pipeline</p>
</li>
</ul>
<p>Benefits for Web3/OSS:</p>
<ul>
<li><p>Stronger bridges between DePIN, DeSci, and AI ecosystems</p>
</li>
<li><p>Model and workflow standardization for open-source researchers</p>
</li>
<li><p>A new market for verifiable research outputs onchain</p>
</li>
</ul>
<p>Benefits for builders and users:</p>
<ul>
<li><p>Researchers can deploy jobs without centralized gatekeepers</p>
</li>
<li><p>Compute providers can directly support science workloads</p>
</li>
<li><p>Communities can co-own and coordinate progress in health, climate, and biological discovery</p>
</li>
</ul>
<hr />
<h2 id="heading-long-term-vision-broader-narrative-amp-ecosystem-thesis"><strong>🌍 Long-Term Vision – Broader Narrative &amp; Ecosystem Thesis</strong></h2>
<p>This partnership sets the tone for how AI and science can evolve through decentralized infrastructure.</p>
<p>It reflects Lilypad's core mission: making permissionless compute a public good. And Poscidon's mission: restoring trust and accessibility in scientific progress.</p>
<p>Together we will:</p>
<ul>
<li><p>Power regenerative AI in medicine, climate, and bioscience</p>
</li>
<li><p>Set best practices for reproducible, open compute-backed research</p>
</li>
<li><p>Establish a new norm: science and AI that is peer-reviewed by code and executed transparently</p>
</li>
</ul>
<p>This is a blueprint for a new knowledge economy.</p>
<hr />
<h2 id="heading-future-outlook-looking-ahead"><strong>📅 Future Outlook / Looking Ahead</strong></h2>
<p>Immediate next steps:</p>
<ul>
<li><p>Co-announce partnership and initial integration plans</p>
</li>
<li><p>Begin onboarding contributors to use Lilypad for weCura and related Poscidon initiatives</p>
</li>
<li><p>Launch a public roadmap and joint technical showcase</p>
</li>
</ul>
<p>Long-term:</p>
<ul>
<li><p>Expanded DeSci + AI compute economy with bounties, incentives, and shared infrastructure</p>
</li>
<li><p>Coordination across DePIN alliances and open science DAOs</p>
</li>
<li><p>A thriving, verifiable, and permissionless research landscape powered by compute-as-a-public-good</p>
</li>
</ul>
<hr />
<p>🔍 Explore PoscidonDAO: <a target="_blank" href="https://www.poscidondao.com">https://www.poscidondao.com</a></p>
<p>🌎 Deploy DeSci Compute on Lilypad: <a target="_blank" href="https://lilypad.tech">https://lilypad.tech</a></p>
<p>#DeSci #Lilypad #PoscidonDAO #weCura #VerifiableCompute #DecentralizedScience #AIForGood #DePIN #ComputeForScience</p>
]]></content:encoded></item><item><title><![CDATA[Why Decentralized AI (deAI) Will Win]]></title><description><![CDATA[Lilypad believes collaborative deAI infrastructure will outcompete centralized AI platforms by design. Below are the structural advantages and differentiators driving this belief:
💸 1. Native Payment Rails

Definition: Integrated crypto and fiat set...]]></description><link>https://blog.lilypad.tech/why-decentralized-ai-deai-will-win</link><guid isPermaLink="true">https://blog.lilypad.tech/why-decentralized-ai-deai-will-win</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[lilypadnetwork]]></category><category><![CDATA[lilypad]]></category><category><![CDATA[design patterns]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 13 May 2025 13:07:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747141376713/b966c98f-f61c-4a2a-8204-e5d212b37e22.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad believes collaborative deAI infrastructure will outcompete centralized AI platforms by design. Below are the structural advantages and differentiators driving this belief:</p>
<h2 id="heading-1-native-payment-rails">💸 1. Native Payment Rails</h2>
<ul>
<li><p><strong>Definition</strong>: Integrated crypto and fiat settlement mechanisms built into the protocol.</p>
</li>
<li><p><strong>Why it matters</strong>: Unlocks global, frictionless participation: whether by a scientist in Kenya or a GPU provider in Vietnam.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Combines Web3 wallets, Stripe fiat integration, and automated smart contract payments.</p>
</li>
</ul>
<h2 id="heading-2-provenance-pipelines">🧬 2. Provenance Pipelines</h2>
<ul>
<li><p><strong>Definition</strong>: Track model and data lineage, ownership, and usage history.</p>
</li>
<li><p><strong>Why it matters</strong>: Enables auditability, royalty distribution, and bias inspection in remixed AI models.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Cryptographic job verification + on-chain records provide traceable model provenance.</p>
</li>
</ul>
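<p>The provenance idea above can be sketched as a chain of content-addressed records, where each job points to the hash of the record that produced its inputs. This is an illustrative sketch only: the field names and the placeholder <code>"bafy..."</code> CIDs are hypothetical, not Lilypad's actual on-chain schema.</p>

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministically hash a record via canonical JSON (sorted keys)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical base-model record; "bafy..." stands in for a real content CID.
base_model = {"kind": "model", "name": "example-base", "weights_cid": "bafy..."}
base_id = record_hash(base_model)

# A fine-tuning job links back to the base model's record hash, forming lineage.
fine_tune_job = {
    "kind": "job",
    "parent": base_id,          # lineage pointer to the upstream record
    "dataset_cid": "bafy...",
    "result_cid": "bafy...",
}
job_id = record_hash(fine_tune_job)

# Verification: anyone holding the records can recompute the hash chain and
# confirm the derived model's lineage without trusting a central registry.
assert fine_tune_job["parent"] == record_hash(base_model)
```

<p>Walking such a chain is what makes auditability and royalty attribution mechanical rather than reputational: every remix carries a verifiable pointer to what it was built from.</p>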
<h2 id="heading-3-permissionless-global-participation">🌍 3. Permissionless Global Participation</h2>
<ul>
<li><p><strong>Definition</strong>: Anyone can host models, contribute compute, or run inference jobs.</p>
</li>
<li><p><strong>Why it matters</strong>: Opens up innovation to solopreneurs, SMEs, and underserved regions.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Full-stack platform (marketplace, job runner, monetization) with open interfaces (CLI/API/GUI).</p>
</li>
</ul>
<h2 id="heading-4-fair-creator-economics">⚖️ 4. Fair Creator Economics</h2>
<ul>
<li><p><strong>Definition</strong>: Transparent, automated revenue sharing for model creators and compute providers.</p>
</li>
<li><p><strong>Why it matters</strong>: Avoids exploitative, opaque compensation models seen in centralized platforms.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Model owners set pricing and earn per use; rewards flow directly to wallets, on-chain.</p>
</li>
</ul>
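<p>The per-use revenue sharing described above can be sketched in a few lines. The split fractions and party names here are invented for illustration; they are not Lilypad's actual settlement parameters.</p>

```python
from decimal import Decimal

def split_payment(price: Decimal, splits: dict) -> dict:
    """Split a per-use payment among parties; split fractions must sum to 1."""
    assert sum(splits.values()) == Decimal("1")
    return {party: price * share for party, share in splits.items()}

# Hypothetical example: the model owner sets a per-call price, and each
# settlement routes shares directly to the participating wallets.
payout = split_payment(
    Decimal("0.10"),  # price per inference call, in tokens
    {
        "model_creator": Decimal("0.7"),
        "compute_provider": Decimal("0.25"),
        "protocol": Decimal("0.05"),
    },
)
# The payouts sum exactly to the price, with no rounding leakage.
assert sum(payout.values()) == Decimal("0.10")
```

<p>Because the split is computed at settlement time from on-chain terms the creator set, compensation is transparent by construction rather than by policy.</p>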
<h2 id="heading-5-composable-infrastructure">🛠 5. Composable Infrastructure</h2>
<ul>
<li><p><strong>Definition</strong>: Easily integrates with agents, storage, data lakes, and other Web3 systems.</p>
</li>
<li><p><strong>Why it matters</strong>: Enables dynamic pipelines, collaborative AI stacks, and network effects.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Designed for plug-and-play interoperability with agent frameworks, Filecoin, Vana, and more.</p>
</li>
</ul>
<h2 id="heading-6-censorship-resistance">🔐 6. Censorship Resistance</h2>
<ul>
<li><p><strong>Definition</strong>: No centralized authority can restrict model deployment, data use, or job execution.</p>
</li>
<li><p><strong>Why it matters</strong>: Protects open scientific research, politically sensitive tools, and grassroots innovation.</p>
</li>
<li><p><strong>Lilypad edge</strong>: Fully on-chain execution and settlement ensures verifiable, tamper-proof job provenance.</p>
</li>
</ul>
<h2 id="heading-7-designed-for-the-future">🧠 7. Designed for the Future</h2>
<ul>
<li><p><strong>Context</strong>: Proprietary AI moats are collapsing. The AI economy is shifting to custom, fine-tuned models; agentic workflows; and user-owned infrastructure.</p>
</li>
<li><p><strong>Lilypad belief</strong>: Only open, verifiable, and economically aligned platforms can scale with this explosion.</p>
</li>
</ul>
<p><strong>Lilypad’s Core Philosophy</strong></p>
<blockquote>
<p>"AI should be a public good - not a corporate asset."</p>
</blockquote>
<p>As a full-stack AI services platform, Lilypad provides a model marketplace, MLOps tooling, and a distributed, on-demand compute network for scaling AI inference across ML pipelines, agent workflows, and more.</p>
<p>Let’s build (actually) open AI together.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Uprising]]></title><description><![CDATA[Lilypad is proud to announce a partnership with Uprising Labs, a cutting-edge game publisher accelerator building the next generation of AI-native games on-chain. As Uprising powers the Dream Catalyst Accelerator and supports titles in the Somnia eco...]]></description><link>https://blog.lilypad.tech/lilypad-x-uprising</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-uprising</guid><category><![CDATA[Web3]]></category><category><![CDATA[gaming]]></category><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[#computernetwork ]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Mon, 12 May 2025 12:55:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746780345114/4401b430-89fe-4599-90bc-05960ad5d1a7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is proud to announce a partnership with <strong>Uprising Labs</strong>, a cutting-edge game publisher accelerator building the next generation of AI-native games on-chain. As Uprising powers the Dream Catalyst Accelerator and supports titles in the Somnia ecosystem, we see a clear alignment: modular AI + immersive game worlds = next-gen play.</p>
<p>Together, we’re unlocking what’s next in on-chain gameplay: adaptive NPCs, generative game content, and dynamic, intelligent player experiences.</p>
<h2 id="heading-2-meet-our-partner-uprising-labs"><strong>🕹️ 2. Meet Our Partner: Uprising Labs</strong></h2>
<p>Uprising Labs is not just another Web3 gaming studio—it’s a full-stack platform for bootstrapping and accelerating Web3-native games. With the <strong>Dream Catalyst Accelerator</strong>, Uprising identifies and supports studios building the future of gaming, layered with AI, social gameplay, and economic primitives.</p>
<p>Built on the <strong>Somnia chain</strong>, Uprising is positioned to deliver ultra-performant, persistent game worlds with a thriving creator economy at their core. Their mission: empower studios to launch AI-infused, multiplayer-first experiences that scale.</p>
<h2 id="heading-3-shared-vision-ai-x-gaming-x-on-chain-fun"><strong>🧭 3. Shared Vision: AI x Gaming x On-Chain Fun</strong></h2>
<p>Lilypad and Uprising Labs share a belief in the future of decentralized, intelligent games:</p>
<ul>
<li><p>Games should be dynamic, emergent, and player-shaped</p>
</li>
<li><p>Infrastructure should be modular, permissionless, and scalable</p>
</li>
<li><p>AI should power more than just characters—it should power creativity, commerce, and community</p>
</li>
</ul>
<p>Lilypad offers decentralized GPU compute, model hosting, and AI APIs. Uprising delivers game development velocity, publishing infrastructure, and community reach.</p>
<p>Together, we’re building the stack for playable, intelligent, verifiable game worlds.</p>
<h2 id="heading-4-synergistic-strengths"><strong>🛠️ 4. Synergistic Strengths</strong></h2>
<h3 id="heading-what-lilypad-brings"><strong>What Lilypad Brings:</strong></h3>
<ul>
<li><p>Permissionless AI compute infrastructure</p>
</li>
<li><p>Model marketplace for inference, generation, and agent execution</p>
</li>
<li><p>Job-based architecture for real-time and batch game AI tasks</p>
</li>
</ul>
<h3 id="heading-what-uprising-brings"><strong>What Uprising Brings:</strong></h3>
<ul>
<li><p>Network of top-tier game studios and builders</p>
</li>
<li><p>Publishing stack and go-to-market support via Dream Catalyst</p>
</li>
<li><p>On-chain gaming execution and infrastructure (via Somnia)</p>
</li>
</ul>
<h3 id="heading-short-term-synergies"><strong>🔧 Short-Term Synergies</strong></h3>
<ul>
<li><p>Co-piloted AI NPC demos with Lilypad-hosted models and Uprising games</p>
</li>
<li><p>AI tooling made easily available for Uprising game teams to use for procedural generation, AI dialogue, or adaptive gameplay</p>
</li>
<li><p>Tutorials and pilot programs for Lilypad model devs to plug into Uprising-supported studios</p>
</li>
</ul>
<h3 id="heading-mid-term-roadmap"><strong>🚀 Mid-Term Roadmap</strong></h3>
<ul>
<li><p>AI Game Jam with co-hosted teams, prize pools, and launch paths via Dream Catalyst</p>
</li>
<li><p>Lilypad integration into Uprising’s SDK for plug-and-play AI workflows</p>
</li>
<li><p>Joint development of AI agents that evolve alongside players in persistent worlds</p>
</li>
</ul>
<h2 id="heading-5-why-this-matters"><strong>🧠 5. Why This Matters</strong></h2>
<p>This partnership expands the decentralized AI stack into one of its most important domains: <strong>entertainment</strong>.</p>
<ul>
<li><p>Gives developers new ways to build responsive, living game worlds</p>
</li>
<li><p>Offers game studios new revenue models by embedding monetizable AI</p>
</li>
<li><p>Powers creator-owned, player-shaped metaverses built on the Somnia chain</p>
</li>
</ul>
<p>It also drives real demand for decentralized compute, reinforcing the flywheel of verifiable job execution, open infra, and on-chain intelligence.</p>
<h2 id="heading-6-ai-for-gaming-new-adventures-begin"><strong>🎮 6. AI for Gaming: New Adventures Begin</strong></h2>
<p>Here are just a few examples of what’s now possible:</p>
<ul>
<li><p>🎭 <strong>Dynamic NPCs</strong> that remember, react, and grow with player interactions</p>
</li>
<li><p>🗺️ <strong>Procedural map and item generation</strong> for infinite replayability</p>
</li>
<li><p>🧠 <strong>Game Masters and narrative agents</strong> for emergent stories</p>
</li>
<li><p>🎮 <strong>Skill-adjusted difficulty</strong> tuned in real-time</p>
</li>
<li><p>👾 <strong>Voice and toxicity detection</strong> for fairer multiplayer</p>
</li>
<li><p>💰 <strong>AI monetization agents</strong> for in-game economies</p>
</li>
</ul>
<p>Lilypad Modular Minds meet the Uprising dream: playable agents, generative fun, and infinitely remixable gameplay.</p>
<h2 id="heading-7-long-term-ecosystem-vision"><strong>🌍 7. Long-Term Ecosystem Vision</strong></h2>
<p>AI is not just a back-end tool—it’s the future of how games will be designed, played, and evolved.</p>
<ul>
<li><p>Uprising powers the studios and launchpads</p>
</li>
<li><p>Somnia delivers execution and economy rails</p>
</li>
<li><p>Lilypad brings the compute, APIs, and agentic logic</p>
</li>
</ul>
<p>This partnership sets the foundation for community-shaped, AI-native, on-chain games that think, adapt, and surprise.</p>
<h2 id="heading-8-looking-ahead"><strong>📅 8. Looking Ahead</strong></h2>
<ul>
<li><p>AI NPC showcase launching soon with Uprising studio partner</p>
</li>
<li><p>Co-hosted AI Game Jam to onboard new AI x game developers</p>
</li>
<li><p>Uprising x Lilypad partner track in Dream Catalyst</p>
</li>
</ul>
<p>🌐 Learn more: <a target="_blank" href="https://uprisinglabs.io/">uprisinglabs.io</a></p>
<p>🧠 Start building: <a target="_blank" href="https://lilypad.tech/">lilypad.tech</a></p>
<p>Let’s build playable AI—modular, creative, and unstoppable.</p>
<p>#AIxGaming #Web3Games #Lilypad #UprisingLabs #Somnia #GameJam #ModularMinds</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Swan]]></title><description><![CDATA[Lilypad is thrilled to announce a new strategic partnership with Swan - a decentralized compute marketplace and incentive platform for GPU providers, ecosystem partners, and Web3 communities.
Together, we are connecting Swan’s compute leasing and com...]]></description><link>https://blog.lilypad.tech/lilypad-x-swan</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-swan</guid><category><![CDATA[AI]]></category><category><![CDATA[Web3]]></category><category><![CDATA[#computernetwork ]]></category><category><![CDATA[agentic AI]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Sat, 10 May 2025 10:00:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745994444914/ce612eba-14da-4aa6-b94b-14d5b1517022.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is thrilled to announce a new strategic partnership with <a target="_blank" href="https://swanchain.io"><strong>Swan</strong></a> - a decentralized compute marketplace and incentive platform for GPU providers, ecosystem partners, and Web3 communities.</p>
<p>Together, we are connecting Swan’s compute leasing and community questing systems with Lilypad’s decentralized AI-first job architecture to grow supply, bootstrap usage, and ignite public awareness.</p>
<h2 id="heading-details-of-partnership-synergistic-strengths"><strong>Details of Partnership - Synergistic Strengths</strong></h2>
<p><strong>Lilypad</strong> provides:</p>
<ul>
<li><p>Decentralized compute infrastructure purpose-built for AI model hosting, inference, and agentic workflows</p>
</li>
<li><p>A modular API-first job engine with blockchain-coordinated payments and provenance</p>
</li>
<li><p>A growing network of builders deploying ML modules, AI agents, and custom applications</p>
</li>
</ul>
<p><strong>Swan</strong> brings:</p>
<ul>
<li><p>Global access to GPU supply through trusted relationships with compute providers</p>
</li>
<li><p>Infrastructure orchestration tools optimized for Web3-native, demand-driven deployments</p>
</li>
<li><p>A track record of supporting builders and compute-focused projects through technical and business support</p>
</li>
</ul>
<p>Together, Lilypad and Swan create a pipeline from GPU liquidity to application-layer value, with integrated tooling for both compute provisioning and viral ecosystem growth.</p>
<h2 id="heading-practical-synergies-short-term"><strong>Practical Synergies (Short-Term)</strong></h2>
<ul>
<li><p>Co-launch of marketing quest initiatives, allowing users to earn points across both Swan and Lilypad ecosystems</p>
</li>
<li><p>AI model developers can deploy on Lilypad with guaranteed access to compute</p>
</li>
<li><p>Compute providers participating in Swan can route idle GPU capacity to real AI workloads on-chain</p>
</li>
</ul>
<p>Use cases available now:</p>
<ul>
<li><p>AI developers can deploy models to Lilypad and rely on Swan-backed GPU supply</p>
</li>
<li><p>Web3 users can earn incentives through cross-ecosystem campaign quests</p>
</li>
<li><p>Both communities can co-earn rewards for cross-promotion and usage</p>
</li>
</ul>
<h2 id="heading-roadmap-collaboration-mid-term"><strong>Roadmap Collaboration (Mid-Term)</strong></h2>
<p>In the coming months, Lilypad and Swan will:</p>
<ul>
<li><p>Expand the integration of Swan-sourced GPUs into Lilypad's verifiable compute network</p>
</li>
<li><p>Launch joint case studies showcasing public infrastructure powering real AI workloads</p>
</li>
<li><p>Provide builders with simplified onboarding documentation and real-time provider benchmarking dashboards</p>
</li>
</ul>
<p>Together, we are building the connective tissue between decentralized compute sourcing and decentralized AI execution.</p>
<h2 id="heading-strategic-value-benefits-for-the-ecosystem"><strong>Strategic Value - Benefits for the Ecosystem</strong></h2>
<p>This partnership unlocks:</p>
<ul>
<li><p>Steady supply for AI workloads with global distribution</p>
</li>
<li><p>Confidence for builders deploying latency-sensitive jobs</p>
</li>
<li><p>A working model of decentralized coordination between compute sourcing and AI application layers</p>
</li>
</ul>
<p>Benefits for the Web3 and OSS ecosystem:</p>
<ul>
<li><p>Demonstrates modular composability between infra providers</p>
</li>
<li><p>Provides a blueprint for sustainable DePIN incentive alignment</p>
</li>
</ul>
<p>Benefits for developers and users:</p>
<ul>
<li><p>Easier deployment of AI models without worrying about compute constraints</p>
</li>
<li><p>Greater geographic availability of compute nodes</p>
</li>
<li><p>Lower friction, cost, and wait time for AI job execution</p>
</li>
</ul>
<h2 id="heading-long-term-vision-open-compute-ai-collaboration"><strong>Long-Term Vision - Open Compute + AI Collaboration</strong></h2>
<p>We believe infrastructure primitives like compute, storage, orchestration, and incentives should not be siloed.</p>
<p>Lilypad and Swan are demonstrating how aligned ecosystems can:</p>
<ul>
<li><p>Bootstrap liquidity across multiple layers: GPU, agent, and community</p>
</li>
<li><p>Design public incentive systems that reward participation and usage</p>
</li>
<li><p>Co-develop the playbook for next-generation decentralized infrastructure projects</p>
</li>
</ul>
<p>Together, we are contributing to a future where compute is as composable, decentralized, and accessible as code.</p>
<h2 id="heading-future-outlook"><strong>Future Outlook</strong></h2>
<p>Immediate next steps:</p>
<ul>
<li><p>Announce partnership publicly across channels</p>
</li>
<li><p>Launch first round of Lilypad x Swan quests and community incentive tracks</p>
</li>
<li><p>Integrate Swan GPUs into the Lilypad provider onboarding flow</p>
</li>
</ul>
<p>Long-term:</p>
<ul>
<li><p>Co-managed decentralized compute incentive systems</p>
</li>
<li><p>Ongoing joint token programs and community campaigns</p>
</li>
<li><p>Deeper protocol-level integration between Swan job leasing and Lilypad's verifiable execution layer</p>
</li>
</ul>
<hr />
<p>📍 Explore Swan: <a target="_blank" href="https://swanchain.io">swanchain.io</a><br />📍 Deploy on Lilypad: <a target="_blank" href="https://lilypad.tech">lilypad.tech</a></p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Akave]]></title><description><![CDATA[Lilypad and Akave are thrilled to announce a formal strategic partnership that brings together two foundational primitives of the decentralized AI stack: compute and storage. This collaboration is a bold step toward building a truly modular, communit...]]></description><link>https://blog.lilypad.tech/lilypad-x-akave</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-akave</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[inference]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[decentralized-ai]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Tue, 06 May 2025 10:00:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745545499915/14e6494b-da81-49f7-b279-d300c374da54.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad and Akave are thrilled to announce a formal strategic partnership that brings together two foundational primitives of the decentralized AI stack: compute and storage. This collaboration is a bold step toward building a truly modular, community-owned AI infrastructure that rivals centralized platforms.</p>
<p>As Lilypad continues to evolve into the nexus of a decentralized AI cooperative, this alliance with Akave strengthens our ability to support verifiable workflows, secure model outputs, data provenance, and composable AI pipelines.</p>
<h2 id="heading-2-meet-our-partner-akave"><strong>2. Meet Our Partner: Akave</strong></h2>
<p>Akave is a Filecoin Layer 2 decentralized data management network that enables secure, programmable data storage, access, and monetization. Designed to empower the next generation of data-driven applications and marketplaces, Akave delivers cost-effective and performant decentralized storage for AI, Web3, and DePIN ecosystems.</p>
<p>Akave’s architecture supports both public and permissioned data buckets, policy-enforced access, and full data provenance—making it the perfect partner for AI systems that require secure storage and traceable lineage.</p>
<h2 id="heading-3-shared-vision-infrastructure-for-open-ai"><strong>3. Shared Vision: Infrastructure for Open AI</strong></h2>
<p>Akave and Lilypad are aligned by a clear and ambitious goal: building infrastructure for decentralized intelligence.</p>
<ul>
<li><p><strong>Decentralization-first</strong>: Both platforms reduce dependency on opaque, centralized cloud providers</p>
</li>
<li><p><strong>Composable AI primitives</strong>: Compute and storage as modular services</p>
</li>
<li><p><strong>Data provenance and monetization</strong>: Transparent value flows and auditability baked into system design</p>
</li>
<li><p><strong>AI accessibility</strong>: Empower developers and creators with verifiable infrastructure at every layer</p>
</li>
</ul>
<p>This partnership is more than integration—it’s the emergence of a decentralized foundation for building, training, and deploying AI applications.</p>
<h2 id="heading-4-synergistic-strengths"><strong>4. Synergistic Strengths</strong></h2>
<h3 id="heading-what-lilypad-brings"><strong>What Lilypad Brings:</strong></h3>
<ul>
<li><p>Serverless decentralized GPU compute</p>
</li>
<li><p>On-demand model hosting, inference APIs, and agent workflows</p>
</li>
<li><p>Verifiable job execution and API-first architecture</p>
</li>
</ul>
<h3 id="heading-what-akave-brings"><strong>What Akave Brings:</strong></h3>
<ul>
<li><p>Decentralized object storage with programmable data buckets</p>
</li>
<li><p>Full support for on-chain provenance and usage policies</p>
</li>
<li><p>Optimized infrastructure for large LLM and ML workloads</p>
</li>
</ul>
<h3 id="heading-short-term-use-cases"><strong>🔧 Short-Term Use Cases</strong></h3>
<ul>
<li><p><strong>RAG Integration Pilot</strong>: Showcase a reference architecture for retrieval-augmented generation using Akave-stored data and Lilypad inference</p>
</li>
<li><p><strong>Model Caching</strong>: Use Akave to store fine-tuned models and intermediate outputs</p>
</li>
<li><p><strong>Job Output Storage</strong>: Enable users to preserve inference results and synthetic datasets with cryptographic traceability</p>
</li>
</ul>
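<p>The retrieval step of such a RAG pipeline can be sketched in a few lines. This is an illustrative example only — the corpus, embeddings, and function names below are ours, not an Akave or Lilypad API: rank stored passages by similarity to the query, then assemble a prompt for a Lilypad-hosted model.</p>

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """Return the k passages whose embeddings best match the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(question, passages):
    """Assemble retrieved passages and the question into a model prompt."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {question}"

# Toy corpus standing in for Akave-stored documents (embeddings are illustrative).
corpus = [
    ([1.0, 0.0], "Akave buckets hold the fine-tuned model weights."),
    ([0.0, 1.0], "Lilypad jobs are priced per execution."),
]
prompt = build_prompt("Where are model weights stored?", retrieve([0.9, 0.1], corpus))
print(prompt.splitlines()[1])  # prints the best-matching passage
```

<p>In a real deployment the passages would be fetched from an Akave bucket and the assembled prompt sent to a Lilypad inference endpoint; only the ranking-and-assembly shape is shown here.</p>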
<p><img src="https://cdn.discordapp.com/attachments/1212898458885685308/1369338993493414049/CleanShot_2025-05-06_at_08.43.452x.png?ex=681b7fd1&amp;is=681a2e51&amp;hm=8c88796fb7bda0144a057c9b5d5d8723c5bb005384ea5f6e80bffeaf0f8a886b&amp;" alt /></p>
<h3 id="heading-mid-term-roadmap"><strong>🚀 Mid-Term Roadmap</strong></h3>
<ul>
<li><p><strong>Plugin-like Integration</strong>: Build native ingestion and retrieval pipelines between Akave and Lilypad</p>
</li>
<li><p><strong>Co-hosted Agent Workflows</strong>: Empower developers to deploy agents that retrieve, compute, and store data using both networks</p>
</li>
<li><p><strong>Synthetic Data Provenance</strong>: Enable tracking and monetization of AI-generated data across the full pipeline</p>
</li>
<li><p><strong>Reputation Transparency</strong>: Store Lilypad job provider statistics and performance metadata using Akave’s provenance infrastructure</p>
</li>
</ul>
<h2 id="heading-5-strategic-impact"><strong>5. Strategic Impact</strong></h2>
<p>This collaboration delivers:</p>
<ul>
<li><p><strong>A fully modular decentralized AI stack</strong></p>
</li>
<li><p><strong>Real value for users</strong>: fast, flexible, censorship-resistant compute and storage</p>
</li>
<li><p><strong>Clear provenance</strong> for models and data outputs, a critical need for commercial and regulated AI</p>
</li>
<li><p><strong>Blueprints for the ecosystem</strong>: example architectures that others can replicate and build on</p>
</li>
</ul>
<p>Together, Lilypad and Akave move the ecosystem beyond theory to execution, showing what a real decentralized AI infrastructure can look like.</p>
<h2 id="heading-6-long-term-ecosystem-vision"><strong>6. Long-Term Ecosystem Vision</strong></h2>
<p>This is only the beginning. The long-term vision is a global, modular, permissionless infrastructure layer for AI:</p>
<ul>
<li><p>Lilypad powers verifiable computation and AI agent execution</p>
</li>
<li><p>Akave anchors decentralized storage and data integrity</p>
</li>
<li><p>Together, they offer an open foundation to train, fine-tune, and deploy models with full transparency and composability</p>
</li>
</ul>
<p>With upcoming POCs such as Waterlily 2.0 (artist attribution via model fine-tuning) and data-to-agent reference architectures, the potential for future impact is vast.</p>
<h2 id="heading-7-looking-ahead"><strong>7. Looking Ahead</strong></h2>
<p>This announcement is only the first beat of a new rhythm. The decentralized AI stack is forming.</p>
<ul>
<li><p>🌐 Sign up for the Akave testnet: <a target="_blank" href="https://www.akave.ai/testnet">https://akave.ai/testnet</a></p>
</li>
<li><p>🧠 Join the Lilypad ecosystem to deploy, run, and monetize models: <a target="_blank" href="https://docs.lilypad.tech">https://docs.lilypad.tech</a></p>
</li>
</ul>
<p>Let’s build decentralized intelligence infrastructure - together.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad + Chirper.ai]]></title><description><![CDATA[Lilypad is thrilled to announce a groundbreaking partnership with Chirper.ai, the world's first social network exclusively for AI agents.
This collaboration unites Lilypad's decentralized compute infrastructure with Chirper's autonomous agent ecosyst...]]></description><link>https://blog.lilypad.tech/lilypad-chirperai</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-chirperai</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Web3]]></category><category><![CDATA[Cryptocurrency]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Sun, 04 May 2025 02:01:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746261644922/9a46f098-5e03-4a7d-8309-f057a0927da8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is thrilled to announce a groundbreaking partnership with <a target="_blank" href="https://chirper.ai">Chirper.ai</a>, the world's first social network exclusively for AI agents.</p>
<p>This collaboration unites Lilypad's decentralized compute infrastructure with Chirper's autonomous agent ecosystem, paving the way for a new era of decentralized AI applications.</p>
<blockquote>
<p>⚠️ Stay tuned: the next thing we launch together will be something truly special. We’re already working on a powerful proof-of-concept to showcase what Lilypad + Chirper unlocks.</p>
<p>💰 Want in? You can now buy $CHIRP on <a target="_blank" href="https://raydium.io/">Raydium</a> to join the agentic revolution.</p>
</blockquote>
<hr />
<h2 id="heading-details-of-partnership-synergistic-strengths">🌟 Details of Partnership - Synergistic Strengths</h2>
<p><strong>Lilypad provides:</strong></p>
<ul>
<li><p>Decentralised compute infrastructure tailored for AI model hosting, inference, and agentic workflows</p>
</li>
<li><p>A modular, API-first job engine with blockchain-coordinated payments and on-chain provenance</p>
</li>
<li><p>A permissionless platform for AI developers to deploy, monetize, and scale models without intermediaries</p>
</li>
</ul>
<p><strong>Chirper.ai brings:</strong></p>
<ul>
<li><p>A vibrant ecosystem of autonomous AI agents engaging in social interactions without human intervention</p>
</li>
<li><p>Innovative tokenomics through the $CHIRP token, enabling agent-based economies and value creation</p>
</li>
<li><p>A platform fostering the development of agentic DAOs and collaborative AI behaviours</p>
</li>
</ul>
<p><strong>Combined strengths:</strong></p>
<ul>
<li><p>Integration of Lilypad's compute resources to power Chirper's AI agents, enhancing performance and scalability</p>
</li>
<li><p>Collaboration on developing decentralized applications that leverage both compute and autonomous agent technologies</p>
</li>
<li><p>Joint efforts to expand the AI and Web3 developer communities through shared tools and resources</p>
</li>
</ul>
<hr />
<h2 id="heading-practical-synergies-short-term">🔧 Practical Synergies (Short-Term)</h2>
<ul>
<li><p><strong>Compute Integration:</strong> Chirper's AI agents will utilize Lilypad's decentralized compute network for enhanced processing capabilities</p>
</li>
<li><p><strong>Developer Tools:</strong> Launch of co-branded tools and SDKs to facilitate the creation and deployment of AI agents on the combined platform</p>
</li>
<li><p><strong>Community Engagement:</strong> Joint campaigns to engage developers and users in building and interacting with autonomous AI agents</p>
</li>
</ul>
<p><strong>Immediate benefits:</strong></p>
<ul>
<li><p>Improved performance and scalability for Chirper's AI agents through access to Lilypad's compute resources</p>
</li>
<li><p>Expanded opportunities for developers to create and monetize AI agents in a decentralized environment</p>
</li>
<li><p>Enhanced user experiences through more responsive and capable AI interactions</p>
</li>
</ul>
<hr />
<h2 id="heading-roadmap-collaboration-mid-term">🚀 Roadmap Collaboration (Mid-Term)</h2>
<ul>
<li><p><strong>Agentic DAOs:</strong> Development of decentralized autonomous organizations composed of AI agents, leveraging both platforms' technologies</p>
</li>
<li><p><strong>Tokenomics Integration:</strong> Alignment of Lilypad's and Chirper's token economies to incentivize participation and value creation</p>
</li>
<li><p><strong>Cross-Platform Applications:</strong> Creation of applications that utilize Chirper's AI agents and Lilypad's compute infrastructure for various use cases, including gaming, social networking, and decentralized finance</p>
</li>
</ul>
<p><strong>Upcoming milestones:</strong></p>
<ul>
<li><p>Release of joint whitepapers and technical documentation outlining the integrated platform's capabilities</p>
</li>
<li><p>Hosting of hackathons and developer events to foster innovation and collaboration within the community</p>
</li>
<li><p>Expansion of the combined ecosystem through partnerships with other AI and Web3 projects</p>
</li>
</ul>
<hr />
<h2 id="heading-strategic-value-benefits-for-the-ecosystem">📈 Strategic Value - Benefits for the Ecosystem</h2>
<p><strong>Why this is strategic:</strong></p>
<ul>
<li><p>Combines decentralized compute and autonomous AI agents to create a robust platform for AI application development</p>
</li>
<li><p>Fosters innovation by providing developers with the tools and infrastructure needed to build next-generation AI solutions</p>
</li>
<li><p>Strengthens the Web3 ecosystem by demonstrating the practical applications of decentralized technologies in AI</p>
</li>
</ul>
<p><strong>Benefits for Web3/OSS:</strong></p>
<ul>
<li><p>Introduction of new use cases for decentralized technologies in AI and social networking</p>
</li>
<li><p>Increased credibility and adoption of decentralized AI solutions through real-world applications</p>
</li>
<li><p>Enhanced composability of infrastructure, enabling seamless integration with other Web3 projects</p>
</li>
</ul>
<p><strong>Benefits for developers/users:</strong></p>
<ul>
<li><p>Access to a powerful platform for building, deploying, and monetizing AI agents</p>
</li>
<li><p>Simplified integration of AI capabilities into decentralized applications</p>
</li>
<li><p>Opportunities to participate in and benefit from the growth of the decentralized AI economy</p>
</li>
</ul>
<hr />
<h2 id="heading-long-term-vision-broader-narrative-amp-ecosystem-thesis">🌍 Long-Term Vision – Broader Narrative &amp; Ecosystem Thesis</h2>
<p>This partnership represents a significant step toward a future where AI infrastructure is decentralized, modular, and accessible to all.</p>
<p>By combining Lilypad's compute network with Chirper's autonomous agent ecosystem, we are laying the foundation for a new paradigm in AI development and interaction.</p>
<p><strong>Shared goals:</strong></p>
<ul>
<li><p>Empower developers to create AI agents that operate independently and collaboratively in decentralized environments</p>
</li>
<li><p>Promote the adoption of decentralized technologies in AI applications across various industries</p>
</li>
<li><p>Foster a vibrant community of developers, users, and stakeholders committed to building the future of AI</p>
</li>
</ul>
<hr />
<h2 id="heading-future-outlook-looking-ahead">📅 Future Outlook / Looking Ahead</h2>
<p><strong>Immediate next steps:</strong></p>
<ul>
<li><p>Integration of Lilypad's compute resources into Chirper's platform</p>
</li>
<li><p>Launch of co-branded developer tools and resources</p>
</li>
<li><p>Initiation of community engagement campaigns to promote the partnership and its benefits</p>
</li>
</ul>
<p><strong>Long-term vision:</strong></p>
<ul>
<li><p>Establishment of a decentralized ecosystem where AI agents can operate, collaborate, and evolve autonomously</p>
</li>
<li><p>Expansion of the platform's capabilities to support a wide range of AI applications and use cases</p>
</li>
<li><p>Continued collaboration with other projects and communities to drive innovation and adoption of decentralized AI solutions</p>
</li>
</ul>
<p><strong>Call to Action:</strong></p>
<ul>
<li><p>Explore Chirper.ai: <a target="_blank" href="https://chirper.ai">https://chirper.ai</a></p>
</li>
<li><p>Deploy on Lilypad: <a target="_blank" href="https://lilypad.tech">https://lilypad.tech</a></p>
</li>
<li><p>Buy $CHIRP on <a target="_blank" href="https://raydium.io">Raydium</a></p>
</li>
<li><p>Join our communities and be part of the decentralized AI revolution</p>
</li>
</ul>
<hr />
<p>Together, Lilypad and Chirper.ai are redefining the boundaries of AI and decentralization. Join us as we build the future of autonomous AI economies.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Baselight]]></title><description><![CDATA[Lilypad and Baselight are proud to announce a formal strategic partnership aimed at accelerating the development of decentralized AI infrastructure. This collaboration represents a high-conviction commitment to shaping a modular, composable future fo...]]></description><link>https://blog.lilypad.tech/lilypad-x-baselight</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-baselight</guid><category><![CDATA[AI]]></category><category><![CDATA[data]]></category><category><![CDATA[decentralized-ai]]></category><category><![CDATA[compute]]></category><category><![CDATA[lilypadnetwork]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Wed, 30 Apr 2025 10:00:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745544106096/871ae1b3-baeb-4b26-98af-955071b58649.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad and Baselight are proud to announce a formal strategic partnership aimed at accelerating the development of decentralized AI infrastructure. This collaboration represents a high-conviction commitment to shaping a modular, composable future for intelligent systems—anchored in transparency, permissionless access, and community ownership.</p>
<p>This alliance is part of Lilypad’s broader ecosystem strategy to bring together every critical layer—compute, models, and now structured data—into an open innovation economy for AI.</p>
<h2 id="heading-2-meet-baselight"><strong>2. Meet Baselight</strong></h2>
<p>Baselight is a decentralized platform for structured data discovery, querying, and monetization. With over 27 billion rows of data, 140,000 tables, and integrations across DeFi, DeSci, and Web3, Baselight enables developers and researchers to build insights, dashboards, and intelligent agents using high-quality, accessible datasets.</p>
<p>Built by Finisterra Labs and backed by Haun Ventures and Lightshift Capital, Baselight is laying the foundation for a globally distributed, transparent data layer for AI.</p>
<h2 id="heading-3-shared-vision-unlocking-open-intelligence"><strong>3. Shared Vision: Unlocking Open Intelligence</strong></h2>
<p>Lilypad and Baselight are deeply aligned in their mission to decentralize core infrastructure for artificial intelligence. Both teams believe in:</p>
<ul>
<li><p>Decentralization as a catalyst for innovation</p>
</li>
<li><p>Open infrastructure as a public good</p>
</li>
<li><p>Data and model provenance as critical primitives for trustworthy AI</p>
</li>
</ul>
<p>Lilypad provides decentralized compute, AI Models and agent execution, while Baselight delivers the structured, queryable data needed to fuel model development, RAG systems, and real-time analytics. Together, the two platforms enable a vertically integrated decentralized AI pipeline.</p>
<h2 id="heading-4-synergistic-strengths"><strong>4. Synergistic Strengths</strong></h2>
<h3 id="heading-lilypads-capabilities"><strong>Lilypad’s Capabilities:</strong></h3>
<ul>
<li><p>Distributed GPU compute network with job-based pricing</p>
</li>
<li><p>Permissionless deployment of AI models with API endpoints</p>
</li>
<li><p>Full-stack infrastructure for inference, RAG, MCP, fine-tuning, and agent workflows</p>
</li>
</ul>
<h3 id="heading-baselights-capabilities"><strong>Baselight’s Capabilities:</strong></h3>
<ul>
<li><p>Massive structured data catalog with 1-click query interface</p>
</li>
<li><p>SQL-native analytics and soon-to-launch data APIs</p>
</li>
<li><p>Transparent licensing, monetization, and versioned provenance</p>
</li>
</ul>
<h3 id="heading-short-term-use-cases"><strong>🔧 Short-Term Use Cases</strong></h3>
<ul>
<li><p>Immediate demos of Lilypad-hosted LLMs accessing Baselight data in RAG workflows</p>
</li>
<li><p>Co-branded tutorials showcasing Baselight’s structured datasets as inputs for deployed models</p>
</li>
<li><p>Co-marketing initiatives to raise awareness around structured data’s role in decentralized AI</p>
</li>
</ul>
<h3 id="heading-mid-term-roadmap"><strong>🚀 Mid-Term Roadmap</strong></h3>
<ul>
<li><p>Native API integrations for streamlined Lilypad-Baselight developer pipelines</p>
</li>
<li><p>Preprocessing models deployed on Lilypad to normalize public Baselight datasets</p>
</li>
<li><p>Opt-in flow for pushing Baselight data into vector indexes for real-time agent querying</p>
</li>
<li><p>Token-aligned revenue share and staking infrastructure tied to data and model contributions</p>
</li>
</ul>
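<p>The opt-in flow for pushing Baselight data into vector indexes begins with flattening structured rows into embeddable text documents. A minimal sketch — the table name, schema, and function are illustrative, not a Baselight API:</p>

```python
def rows_to_documents(table_name, columns, rows):
    """Flatten structured rows into plain-text documents ready for embedding."""
    docs = []
    for row in rows:
        fields = ", ".join(f"{col}={val}" for col, val in zip(columns, row))
        docs.append(f"{table_name}: {fields}")
    return docs

# Toy rows standing in for a Baselight query result (schema is illustrative).
docs = rows_to_documents("token_prices", ["token", "usd"], [("ETH", 3000), ("BTC", 97000)])
print(docs[0])  # prints "token_prices: token=ETH, usd=3000"
```

<p>Each resulting document would then be embedded and written to a vector index so Lilypad-hosted agents can query it in real time.</p>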
<h2 id="heading-5-strategic-impact"><strong>5. Strategic Impact</strong></h2>
<p>This partnership creates a robust, composable stack for the decentralized AI economy:</p>
<ul>
<li><p>Model + Compute + Data in one programmable pipeline</p>
</li>
<li><p>Greater accessibility for developers and researchers</p>
</li>
<li><p>Verified provenance from data ingestion to inference outputs</p>
</li>
<li><p>Cross-platform monetization opportunities for both data and model contributors</p>
</li>
</ul>
<p>It also signals to the broader ecosystem that decentralization is not just viable—it’s operational and performant.</p>
<h2 id="heading-6-long-term-ecosystem-vision"><strong>6. Long-Term Ecosystem Vision</strong></h2>
<p>This collaboration represents a foundational move toward an open-source intelligence stack. Structured data fuels the reasoning layer of intelligent systems. Compute makes it actionable. Together, Lilypad and Baselight unlock:</p>
<ul>
<li><p>Agentic pipelines powered by verifiable data</p>
</li>
<li><p>Training pipelines that preserve IP and open-access distribution</p>
</li>
<li><p>A new model for trust-aligned, modular infrastructure in the AI economy</p>
</li>
</ul>
<p>Baselight will become the trusted repository for public and permissioned structured data. Lilypad will ensure this data can be processed, queried, and monetized through composable compute workflows.</p>
<h2 id="heading-7-looking-ahead"><strong>7. Looking Ahead</strong></h2>
<p>Explore Baselight: <a target="_blank" href="https://baselight.ai/">baselight.ai</a><br />Explore Lilypad: <a target="_blank" href="https://lilypad.tech/">lilypad.tech</a></p>
<p>Join us as we help define the next era of AI—one built on open infrastructure, transparent coordination, and verifiable trust.</p>
<p>Let’s build decentralized intelligence infrastructure - together.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Hive Intelligence]]></title><description><![CDATA[1. Introduction
Lilypad is thrilled to announce a strategic partnership with Hive Intelligence, a project that shares our mission to decentralize AI from the ground up. This collaboration marks a pivotal step toward building composable, agentic ecosy...]]></description><link>https://blog.lilypad.tech/lilypad-x-hive-intelligence</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-hive-intelligence</guid><category><![CDATA[AI]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[Web3]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Sat, 26 Apr 2025 00:30:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1745547806405/c03b25ec-5af6-4b96-88e6-a95c07b3feab.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-1-introduction"><strong>1. Introduction</strong></h2>
<p>Lilypad is thrilled to announce a strategic partnership with <strong>Hive Intelligence</strong>, a project that shares our mission to decentralize AI from the ground up. This collaboration marks a pivotal step toward building composable, agentic ecosystems powered by open data, decentralized compute, and permissionless intelligence.</p>
<p>Together, we are setting a new standard for AI networks that are not only powerful, but modular, traceable, and self-evolving.</p>
<h2 id="heading-2-meet-our-partner-hive-intelligence"><strong>2. Meet Our Partner: Hive Intelligence</strong></h2>
<p>Hive Intelligence is a decentralized ecosystem for collective machine intelligence. By combining the power of individual agents with a reputation-based coordination layer, Hive enables a swarm of autonomous entities to work collaboratively across tasks, domains, and applications.</p>
<p>It’s not just about running an agent. It’s about coordinating thousands of them into purposeful, intelligent collectives—complete with memory, incentives, and composable structure.</p>
<h2 id="heading-3-shared-vision-the-cooperative-ai-stack"><strong>3. Shared Vision: The Cooperative AI Stack</strong></h2>
<p>Lilypad and Hive Intelligence share a belief in open, collaborative intelligence infrastructure:</p>
<ul>
<li><p>AI should be modular and composable, not locked into black boxes</p>
</li>
<li><p>Networks should reward contribution and cooperation, not centralize control</p>
</li>
<li><p>Developers, researchers, and communities should own the tools of intelligence</p>
</li>
</ul>
<p>Lilypad provides the decentralized compute substrate—hosting, executing, and monetizing models and agents. Hive brings the orchestration, collective reasoning, and agent-to-agent collaboration that turns those models into living systems.</p>
<p>Together, we enable a new category: decentralized cooperative AI.</p>
<h2 id="heading-4-synergistic-strengths"><strong>4. Synergistic Strengths</strong></h2>
<h3 id="heading-what-lilypad-brings"><strong>What Lilypad Brings:</strong></h3>
<ul>
<li><p>Globally distributed GPU compute</p>
</li>
<li><p>On-demand job execution for inference, fine-tuning, and agent workflows</p>
</li>
<li><p>Modular, API-first architecture for hosting permissionless AI</p>
</li>
</ul>
<h3 id="heading-what-hive-brings"><strong>What Hive Brings:</strong></h3>
<ul>
<li><p>Agent graph protocol for distributed collaboration</p>
</li>
<li><p>On-chain reputation and decision layers</p>
</li>
<li><p>Tooling to compose, spawn, and route between intelligent agents</p>
</li>
</ul>
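<p>Routing between intelligent agents via an on-chain reputation layer could look like this reputation-weighted dispatch — a minimal sketch in which the agent records, skill names, and scores are all illustrative, not Hive's actual protocol:</p>

```python
def route(task, agents):
    """Dispatch a task to the highest-reputation agent advertising the needed skill."""
    capable = [a for a in agents if task["skill"] in a["skills"]]
    if not capable:
        raise ValueError("no capable agent for skill: " + task["skill"])
    return max(capable, key=lambda a: a["reputation"])

# Toy agent registry (names and reputation scores are illustrative).
agents = [
    {"name": "summarizer-1", "skills": {"summarize"}, "reputation": 0.72},
    {"name": "summarizer-2", "skills": {"summarize", "translate"}, "reputation": 0.91},
]
print(route({"skill": "summarize"}, agents)["name"])  # prints "summarizer-2"
```

<p>In the combined stack, the selected agent's job would then execute on Lilypad's compute network, with the result feeding back into its reputation score.</p>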
<h3 id="heading-short-term-synergies"><strong>🔧 Short-Term Synergies</strong></h3>
<ul>
<li><p>Co-hosted agents that utilize Lilypad for compute and Hive for logic and coordination</p>
</li>
<li><p>Shared reference architecture for building decentralized AI swarms</p>
</li>
<li><p>Joint community initiatives and cross-ecosystem tutorials for developers</p>
</li>
</ul>
<h3 id="heading-mid-term-roadmap"><strong>🚀 Mid-Term Roadmap</strong></h3>
<ul>
<li><p>Native integration: deploy Hive agents directly to Lilypad via plug-and-play modules</p>
</li>
<li><p>Decentralized agent marketplaces with Lilypad-backed compute execution</p>
</li>
<li><p>Hive memory and reputation layers attached to Lilypad job metadata</p>
</li>
</ul>
<h2 id="heading-5-strategic-value-for-the-ecosystem"><strong>5. Strategic Value for the Ecosystem</strong></h2>
<p>This partnership unlocks tangible benefits:</p>
<ul>
<li><p>A truly modular decentralized AI pipeline: Agent Coordination + Scalable Compute</p>
</li>
<li><p>Support for long-lived, autonomous agents across use cases like DeSci, DePIN, and more</p>
</li>
<li><p>Examples of collective memory, agent training, and inference all in one ecosystem</p>
</li>
<li><p>Composable interfaces for researchers, builders, and institutions</p>
</li>
</ul>
<p>It signals that open-source AI is no longer just possible—it’s here, growing, and collaborative by design.</p>
<h2 id="heading-6-long-term-ecosystem-vision"><strong>6. Long-Term Ecosystem Vision</strong></h2>
<p>We believe the next evolution of AI is <em>cooperative intelligence</em>. Not one model to rule them all, but thousands of interconnected agents collaborating across ecosystems, each with their own incentives, provenance, and performance record.</p>
<p>Hive is building the brains. Lilypad is building the muscle. And together, we’re creating the nervous system of decentralized AI.</p>
<p>This partnership paves the way for:</p>
<ul>
<li><p>Decentralized cognition at scale</p>
</li>
<li><p>Composable, swarm-native AI architectures</p>
</li>
<li><p>Fair attribution, persistent memory, and cross-network coordination</p>
</li>
</ul>
<h2 id="heading-7-looking-ahead"><strong>7. Looking Ahead</strong></h2>
<ul>
<li><p>Co-branded reference examples and tutorials launching soon</p>
</li>
<li><p>Exploratory research into decentralized agent marketplaces</p>
</li>
<li><p>Hive x Lilypad developer integrations and pilot swarm deployments</p>
</li>
</ul>
<p>🌐 Learn more: <a target="_blank" href="https://hiveintelligence.xyz">hiveintelligence.xyz</a><br />🧠 Explore Lilypad: <a target="_blank" href="https://lilypad.tech">lilypad.tech</a></p>
<p>Let’s build the next generation of AI—modular, verifiable, and permissionless—together.</p>
<p>#CooperativeAI #Web3 #MultiAgentSystems #DePIN #LLMs #Lilypad #Hive</p>
]]></content:encoded></item><item><title><![CDATA[Resource Provider Beta Program]]></title><description><![CDATA[As the Lilypad Network pushes towards Mainnet launch, our team has been implementing significant protocol improvements for scalability and job consistency. As part of this work, the RP beta program has been launched as the next phase of the Lilybit i...]]></description><link>https://blog.lilypad.tech/resource-provider-beta-program</link><guid isPermaLink="true">https://blog.lilypad.tech/resource-provider-beta-program</guid><category><![CDATA[AI]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[Arbitrum]]></category><dc:creator><![CDATA[Logan Lentz]]></dc:creator><pubDate>Fri, 04 Apr 2025 20:35:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1743794847353/ba9e6665-29e9-439f-be80-52d233ee374a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As the Lilypad Network pushes towards Mainnet launch, our team has been implementing significant protocol improvements for scalability and job consistency. As part of this work, the RP beta program has been launched as the next phase of the Lilybit incentive program for Resource Providers (RPs) on Testnet.</p>
<p>RPs selected for the beta will get a Discord role unlocking access to a channel for beta communications. Only RPs that have been accepted into the beta can post resource offers to the Lilypad solver to run jobs.</p>
<p>The beta aims to replicate the job-based revenue RPs will earn on Mainnet. Nodes that run jobs consistently each day will earn rewards based on the hardware they contribute to the network.</p>
<p><strong>Anyone can submit a request</strong> <a target="_blank" href="https://docs.lilypad.tech/lilypad/resource-providers/hardware-requirements#apply-to-our-closed-beta-resource-provider-program"><strong>through this form to become a Resource Provider</strong></a> <strong>in the beta program.</strong></p>
<h2 id="heading-verification-process">Verification Process</h2>
<p>To ensure Resource Providers admitted to the RP beta are operating as expected, test jobs will run on each RP at least once every hour. RP nodes must be able to process these jobs successfully and return valid output. If your node has been removed from the RP beta and is not running jobs, please reach out to us in the RP beta Discord channel if you haven't already been notified.</p>
<p>Nodes that don't meet our performance requirements will unfortunately be removed from the program and will not be eligible for rewards.</p>
<p><strong>More information about our base daily reward calculation is found below.</strong></p>
<h2 id="heading-tracking-your-resource-provider-on-lilypad">Tracking your Resource Provider on Lilypad</h2>
<p>To query the Solver for the current online nodes (over all resource offers)</p>
<p><code>curl https://solver-testnet.lilypad.tech/api/v1/resource_offers | jq '.[] .resource_provider' | sort | uniq</code></p>
<p>To query the Solver for current online nodes (not matched)</p>
<p><code>curl "https://solver-testnet.lilypad.tech/api/v1/resource_offers?not_matched=true" | jq '.[] | .resource_provider'</code></p>
<p>To query the Solver for job offers in the queue</p>
<p><code>curl "https://solver-testnet.lilypad.tech/api/v1/job_offers?not_matched=true&amp;cancelled=false" | jq '.[] .job_offer.module.repo'</code></p>
<h2 id="heading-rewards-information">Rewards Information</h2>
<p>Daily Resource Provider rewards are determined by the hardware specifications of the RP as well as its reliability/uptime. Each month, rewards are split between all participating RPs. Currently, an RTX 4090 earns approximately 2,200 Lilybits per day (as of March 2025). <strong>RP points can be tracked</strong> <a target="_blank" href="https://rp-points.lilypad.tech/"><strong>here</strong></a><strong>.</strong></p>
<p><strong>It's important to note that during the beta program, RPs will not face slashing penalties regardless of whether they are accepted into or participate in the beta.</strong></p>
<p>It is also possible for Resource Providers to earn extra bonus Lilybits in a few situations, such as:</p>
<ul>
<li><p>Troubleshooting critical problems with a Lilypad RP or a Lilypad module</p>
</li>
<li><p>Tooling built to assist in management of a Lilypad RP</p>
</li>
<li><p>Tooling built around Lilypad module testing</p>
</li>
</ul>
<h2 id="heading-looking-forward">Looking Forward</h2>
<p>While we may encounter some challenges along the way, these efforts are crucial to building a reliable network for our Job Creators (Lilypad network users). Your patience and dedication as we transition to the next phase of Lilypad are deeply appreciated. Thank you for your continued support!</p>
]]></content:encoded></item><item><title><![CDATA[Build a chatbot with Lilypad]]></title><description><![CDATA[Lilypad recently released an open-beta Inference API aka “Anura”! It supports a range of powerful text-to-text models such as Llama3, DeepSeek and Qwen2.5. Anura offers a flexible and scalable way to integrate LLMs into applications, making it straig...]]></description><link>https://blog.lilypad.tech/build-a-chatbot-with-lilypad</link><guid isPermaLink="true">https://blog.lilypad.tech/build-a-chatbot-with-lilypad</guid><category><![CDATA[AI]]></category><category><![CDATA[LLaMa]]></category><category><![CDATA[Cryptocurrency]]></category><category><![CDATA[chatbot]]></category><category><![CDATA[llm]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Phil Billingsby]]></dc:creator><pubDate>Fri, 14 Mar 2025 21:22:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741799539407/a1e7a017-32b2-4e37-bdff-e8e6e7b9fffa.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad recently released an open-beta <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/inference-api">Inference API</a> aka “Anura”! It supports a range of powerful text-to-text models such as Llama3, DeepSeek and Qwen2.5. Anura offers a flexible and scalable way to integrate LLMs into applications, making it straight-forward to build AI-powered tools with real-time inference. Whether you're working on chatbots, content generation or research applications, the Lilypad API provides a scalable and efficient solution with ease-of-use API access.</p>
<p>In this guide, we will be building a simple yet functional chatbot using Next.js and Llama3, leveraging Anura's API to generate responses based on user input. By the end, you’ll have a working chatbot and a deeper understanding of how to integrate LLMs into your own applications.</p>
<h2 id="heading-available-models">Available models</h2>
<p>Our API currently supports a variety of powerful AI models tailored for different tasks, from code generation to multimodal reasoning. Whether you're looking for high-performance text generation or efficient scaling, we have models suited to your needs. We are always exploring and deploying new models to expand our offerings, so if there's a specific model you'd like to see added, let us know!</p>
<ul>
<li><p><strong>Qwen2.5 Coder</strong> (7B)</p>
</li>
<li><p><strong>DeepSeek R1</strong> (7B)</p>
</li>
<li><p><strong>LLaVA</strong> (7B)</p>
</li>
<li><p><strong>OpenThinker</strong> (7B)</p>
</li>
<li><p><strong>Phi-4</strong> (14B)</p>
</li>
<li><p><strong>Phi-4 Mini</strong> (3.8B)</p>
</li>
<li><p><strong>Qwen2.5</strong> (7B)</p>
</li>
<li><p><strong>DeepScaler</strong> (1.5B)</p>
</li>
<li><p><strong>Llama 3.1</strong> (8B)</p>
</li>
<li><p><strong>Mistral</strong> (7B)</p>
</li>
</ul>
<h2 id="heading-getting-started">Getting started</h2>
<p>Run the following command to create a new Next.js application:</p>
<pre><code class="lang-bash">npx create-next-app lilypad-chatbot
</code></pre>
<p>Navigate into the project directory:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> lilypad-chatbot
</code></pre>
<p>Before moving on, let’s generate Anura API keys. Head to the <a target="_blank" href="https://anura.lilypad.tech/">Anura site</a> and sign up. Log in to your account and generate an API key.</p>
<p>Create a <code>.env</code> file in the project root and add your key to the <code>LILYPAD_API_TOKEN</code> variable:</p>
<pre><code class="lang-plaintext">LILYPAD_API_TOKEN=&lt;API_TOKEN&gt;
</code></pre>
<h2 id="heading-client-side-components">Client-side components</h2>
<p>The <code>Form</code> component serves as the interface between the user and the selected LLM. It manages user input, maintains conversation history, sends requests to the inference API and updates the UI with AI-generated responses.</p>
<p>At the heart of this component is conversation state management. Since LLMs perform better with contextual input, we retain the last six messages (<code>MAX_HISTORY = 6</code>). As this is more of a basic illustrative example, setting a max history prevents excessive memory usage while guaranteeing that the AI has enough recent context to generate relevant responses. Every time a user submits a new message, it’s added to the conversation state and the oldest messages are discarded if necessary.</p>
<p>Once the message is ready, it is sent to the API via a <code>POST</code> request. This request includes the current conversation history, allowing the LLM to generate responses in context. The function <code>extractContent()</code> processes this stream, filtering out incomplete or malformed data and extracting the final AI-generated text.</p>
<p>After processing the response, the conversation state updates and the UI refreshes to show the latest interaction. The input field is also cleared, allowing the user to continue the conversation.</p>
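<p>To make that parsing step concrete, here is the <code>extractContent()</code> helper as a standalone function you can run in Node, together with a sample payload. The payload is hypothetical: it mirrors the OpenAI-style <code>choices</code> shape the stream is split into, but treat the exact wire format as an assumption and check the Anura docs:</p>

```javascript
// Standalone sketch of the parsing described above. The API streams
// SSE-style output, so the raw text looks like repeated "data: {json}" blocks.
function extractContent(apiResponse) {
  const { text } = apiResponse;

  // Split on "data: " and keep only entries that parse as JSON
  // and contain an assistant message.
  const jsonStrings = text.split("data: ").filter((entry) => {
    try {
      const jsonData = JSON.parse(entry.trim());
      return jsonData.choices?.[0]?.message?.content;
    } catch {
      return false; // skip empty or malformed fragments such as "[DONE]"
    }
  });

  if (jsonStrings.length === 0) return null;

  // The last valid entry holds the final assistant response.
  const finalData = JSON.parse(jsonStrings[jsonStrings.length - 1].trim());
  return finalData.choices[0].message.content;
}

// Example with a hypothetical payload:
const sample =
  'data: {"choices":[{"message":{"content":"Hello!"}}]}\n\ndata: [DONE]\n\n';
extractContent({ text: sample }); // → "Hello!"
```

<p>This is the same logic the <code>Form</code> component uses below; having it in isolation makes it easy to see why empty splits and the <code>[DONE]</code> sentinel are filtered out.</p>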
<p>Create a directory named <code>components</code> inside of <code>app</code>. Next we can create the form component we will be using as the main chat section. Create a file named <code>Form.js</code> and add the following code:</p>
<pre><code class="lang-javascript"><span class="hljs-string">"use client"</span>;
<span class="hljs-keyword">import</span> { useState } <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-keyword">const</span> MAX_HISTORY = <span class="hljs-number">6</span>; <span class="hljs-comment">// Limits conversation history to the last 6 messages to avoid excessive memory usage.</span>

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Form</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> [inputValue, setInputValue] = useState(<span class="hljs-string">""</span>); 
  <span class="hljs-keyword">const</span> [conversation, setConversation] = useState([]); <span class="hljs-comment">// Stores the chat history between the user and the AI.</span>
  <span class="hljs-keyword">const</span> [loading, setLoading] = useState(<span class="hljs-literal">false</span>); <span class="hljs-comment">// Tracks if the AI is currently generating a response.</span>

  <span class="hljs-keyword">const</span> handleSubmit = <span class="hljs-keyword">async</span> (e) =&gt; {
    e.preventDefault();
    setLoading(<span class="hljs-literal">true</span>);

    <span class="hljs-comment">// Keeps only the last MAX_HISTORY messages to maintain context but prevent unbounded growth.</span>
    <span class="hljs-keyword">const</span> updatedConversation = [
      ...conversation.slice(-MAX_HISTORY), <span class="hljs-comment">// Trims conversation history before adding new input.</span>
      { <span class="hljs-attr">role</span>: <span class="hljs-string">"user"</span>, <span class="hljs-attr">content</span>: inputValue }, <span class="hljs-comment">// Appends the latest user message.</span>
    ];

    <span class="hljs-keyword">try</span> {
      <span class="hljs-keyword">await</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">resolve</span> =&gt;</span> <span class="hljs-built_in">setTimeout</span>(resolve, <span class="hljs-number">500</span>)); <span class="hljs-comment">// Simulates a short delay to mimic request latency.</span>

      <span class="hljs-comment">// Sends conversation history to the backend for inference.</span>
      <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"/api/run-inference"</span>, {
        <span class="hljs-attr">method</span>: <span class="hljs-string">"POST"</span>,
        <span class="hljs-attr">headers</span>: { <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span> },
        <span class="hljs-attr">body</span>: <span class="hljs-built_in">JSON</span>.stringify({ <span class="hljs-attr">messages</span>: updatedConversation }),
      });

      <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> res.json();
      <span class="hljs-keyword">const</span> result = extractContent(data); <span class="hljs-comment">// Extracts the AI response from the streamed API output.</span>

      <span class="hljs-keyword">if</span> (!res.ok) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(data.error || <span class="hljs-string">"Failed to fetch response"</span>);

      <span class="hljs-comment">// Updates conversation history, ensuring the user message and AI response stay within MAX_HISTORY.</span>
      setConversation([...updatedConversation, { <span class="hljs-attr">role</span>: <span class="hljs-string">"assistant"</span>, <span class="hljs-attr">content</span>: result }]);
      setInputValue(<span class="hljs-string">""</span>); <span class="hljs-comment">// Clears input field after submission.</span>
    } <span class="hljs-keyword">catch</span> (error) {
      <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"Error:"</span>, error.message);
      alert(<span class="hljs-string">`Error: <span class="hljs-subst">${error.message}</span>`</span>);
    }

    setLoading(<span class="hljs-literal">false</span>);
  };

  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">extractContent</span>(<span class="hljs-params">apiResponse</span>) </span>{
    <span class="hljs-keyword">const</span> { text } = apiResponse;

    <span class="hljs-comment">// Split response by "data: " but remove empty entries</span>
    <span class="hljs-keyword">const</span> jsonStrings = text.split(<span class="hljs-string">"data: "</span>).filter(<span class="hljs-function">(<span class="hljs-params">entry</span>) =&gt;</span> {
      <span class="hljs-keyword">try</span> {
        <span class="hljs-keyword">const</span> jsonData = <span class="hljs-built_in">JSON</span>.parse(entry.trim());
        <span class="hljs-comment">// Ensure it's the assistant's message by checking for 'choices'</span>
        <span class="hljs-keyword">return</span> jsonData.choices?.[<span class="hljs-number">0</span>]?.message?.content;
      } <span class="hljs-keyword">catch</span> {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>; <span class="hljs-comment">// Skip invalid JSON entries</span>
      }
    });

    <span class="hljs-keyword">if</span> (jsonStrings.length === <span class="hljs-number">0</span>) <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;

    <span class="hljs-comment">// Get the last valid assistant response</span>
    <span class="hljs-keyword">const</span> finalData = <span class="hljs-built_in">JSON</span>.parse(jsonStrings[jsonStrings.length - <span class="hljs-number">1</span>].trim());
    <span class="hljs-keyword">return</span> finalData.choices[<span class="hljs-number">0</span>].message.content;
  }

  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"flex flex-col mx-auto text-center items-center w-2/3 justify-center bg-black text-white"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-3xl mb-4"</span>&gt;</span>llama3.1:8b<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full p-6 border border-white rounded-lg bg-gray-900 text-left"</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-lg font-semibold mb-2"</span>&gt;</span>Conversation:<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"p-3 bg-gray-800 border border-gray-600 rounded-lg w-full text-white"</span>&gt;</span>
          {conversation.length === 0 ? (
            <span class="hljs-tag">&lt;<span class="hljs-name">p</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"text-gray-400"</span>&gt;</span>No messages yet. Ask something!<span class="hljs-tag">&lt;/<span class="hljs-name">p</span>&gt;</span>
          ) : (
            conversation.map((msg, index) =&gt; (
              <span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">key</span>=<span class="hljs-string">{index}</span> <span class="hljs-attr">className</span>=<span class="hljs-string">{</span>`<span class="hljs-attr">mb-2</span> ${<span class="hljs-attr">msg.role</span> === <span class="hljs-string">"user"</span> ? "<span class="hljs-attr">text-blue-300</span>" <span class="hljs-attr">:</span> "<span class="hljs-attr">text-green-300</span>"}`}&gt;</span>
                <span class="hljs-tag">&lt;<span class="hljs-name">strong</span>&gt;</span>{msg.role === "user" ? "You:" : "AI:"}<span class="hljs-tag">&lt;/<span class="hljs-name">strong</span>&gt;</span> {msg.content}
              <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
            ))
          )}
        <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span>

      <span class="hljs-tag">&lt;<span class="hljs-name">form</span> <span class="hljs-attr">onSubmit</span>=<span class="hljs-string">{handleSubmit}</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"mx-auto w-full p-6 border border-white rounded-lg mt-4"</span>&gt;</span>
        {/* Input field for user messages */}
        {!loading ? (
          <span class="hljs-tag">&lt;<span class="hljs-name">input</span>
            <span class="hljs-attr">type</span>=<span class="hljs-string">"text"</span>
            <span class="hljs-attr">value</span>=<span class="hljs-string">{inputValue}</span>
            <span class="hljs-attr">onChange</span>=<span class="hljs-string">{(e)</span> =&gt;</span> setInputValue(e.target.value)}
            className="w-full p-3 bg-black border border-white rounded-lg text-white placeholder-gray-400 focus:outline-none focus:ring-2 focus:ring-white"
            placeholder="Ask me anything..."
            required
          /&gt;
        ) : (
          // Displays a loading animation while waiting for the response. You can use whatever animation you want
          <span class="hljs-tag">&lt;<span class="hljs-name">img</span> <span class="hljs-attr">src</span>=<span class="hljs-string">"lp-logo.svg"</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"w-6 mx-auto animate-spin"</span> /&gt;</span>
        )}

        {/* Submit button */}
        <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
          <span class="hljs-attr">type</span>=<span class="hljs-string">"submit"</span>
          <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full mt-4 p-3 font-semibold border border-white rounded-lg transition-all duration-200 ease-in-out bg-white text-black hover:bg-gray-300 disabled:opacity-50 disabled:cursor-not-allowed"</span>
          <span class="hljs-attr">disabled</span>=<span class="hljs-string">{loading}</span>
        &gt;</span>
          {loading ? "Thinking..." : "Submit"}
        <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">form</span>&gt;</span>

      {/* Button to clear conversation history */}
      {conversation.length &gt; 0 &amp;&amp; (
        <span class="hljs-tag">&lt;<span class="hljs-name">button</span>
          <span class="hljs-attr">className</span>=<span class="hljs-string">"w-full mt-4 p-3 font-semibold border border-white rounded-lg transition-all duration-200 ease-in-out bg-white text-black hover:bg-gray-300"</span>
          <span class="hljs-attr">onClick</span>=<span class="hljs-string">{()</span> =&gt;</span> {
            setConversation([]); // Resets conversation history.
          }}
        &gt;
          Try Again
        <span class="hljs-tag">&lt;/<span class="hljs-name">button</span>&gt;</span>
      )}
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
}
</code></pre>
<p>The <code>page.js</code> file serves as the entry point for the application, rendering the main chat interface. It imports the <code>Form</code> component from the <code>components</code> directory and displays it inside a centered container.</p>
<p>Inside of <code>page.js</code>, add the code below:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> Form <span class="hljs-keyword">from</span> <span class="hljs-string">"./components/Form"</span>;
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Home</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span> <span class="hljs-attr">className</span>=<span class="hljs-string">"items-center justify-items-center min-h-screen p-8 pb-20 gap-16 sm:p-20 font-[family-name:var(--font-geist-sans)]"</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">Form</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
  );
}
</code></pre>
<h2 id="heading-server-side-component">Server-side component</h2>
<p>The <code>route.js</code> file acts as the backend handler for processing user queries and interacting with the Lilypad API. It receives messages from the frontend, sends them to the LLM, processes the streamed response and then returns the final output to the client.</p>
<p>Unlike a standard API call that returns a complete response at once, the Lilypad API responds with a continuous stream of data. This means we need to handle incoming chunks incrementally, decoding them as they arrive. The <code>reader.read()</code> loop ensures that the entire response is collected piece by piece before returning it to the client.</p>
<p>Each request carries the current conversation history (context for the LLM), allowing the model to generate responses with relevant context. The <code>temperature</code> and <code>max_tokens</code> parameters control how the model behaves. The <code>temperature</code> adjusts randomness, while <code>max_tokens</code> ensures the response doesn’t exceed a set length. For more information on the valid parameters for the API, please refer to the <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/inference-api#valid-options-parameters-and-default-values">Lilypad docs</a>.</p>
<p>Once the response is fully received and processed, it is returned to the client. This allows the <code>Form</code> component to update the UI dynamically, creating a smooth and responsive chat experience.</p>
<p>Inside of the <code>app</code> directory, create an <code>api</code> folder, and within it, a <code>run-inference</code> directory. In this directory, create a <code>route.js</code> file. This file will define the server-side API endpoint responsible for handling chat requests and communicating with the API for AI inference.</p>
<p>The <code>route.js</code> function will:</p>
<ul>
<li><p>Extract the conversation history from the request.</p>
</li>
<li><p>Construct the inference request with model parameters.</p>
</li>
<li><p>Send the request to the Lilypad API for processing.</p>
</li>
<li><p>Stream the response chunk by chunk, ensuring efficient handling of large responses.</p>
</li>
<li><p>Return the processed response to the client.</p>
</li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { NextResponse } <span class="hljs-keyword">from</span> <span class="hljs-string">"next/server"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">POST</span>(<span class="hljs-params">req</span>) </span>{
  <span class="hljs-keyword">try</span> {
    <span class="hljs-keyword">const</span> { messages } = <span class="hljs-keyword">await</span> req.json();
    <span class="hljs-keyword">const</span> API_URL = <span class="hljs-string">"https://anura-testnet.lilypad.tech/api/v1/chat/completions"</span>;
    <span class="hljs-keyword">const</span> API_TOKEN = process.env.LILYPAD_API_TOKEN;

    <span class="hljs-keyword">const</span> requestBody = {
      <span class="hljs-attr">model</span>: <span class="hljs-string">"llama3.1:8b"</span>, <span class="hljs-comment">// Can use any available model on the API</span>
      messages, <span class="hljs-comment">// Passes the conversation history to maintain context</span>
      <span class="hljs-attr">max_tokens</span>: <span class="hljs-number">2048</span>, <span class="hljs-comment">// Caps response length to prevent runaway token usage</span>
      <span class="hljs-attr">temperature</span>: <span class="hljs-number">0.7</span>, <span class="hljs-comment">// Controls randomness—higher values make responses more diverse</span>
    };

    <span class="hljs-keyword">const</span> response = <span class="hljs-keyword">await</span> fetch(API_URL, {
      <span class="hljs-attr">method</span>: <span class="hljs-string">"POST"</span>,
      <span class="hljs-attr">headers</span>: {
        <span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>,
        <span class="hljs-string">"Accept"</span>: <span class="hljs-string">"text/event-stream"</span>, <span class="hljs-comment">// Enables streaming to return partial results as they are generated</span>
        <span class="hljs-string">"Authorization"</span>: <span class="hljs-string">`Bearer <span class="hljs-subst">${API_TOKEN}</span>`</span>,
      },
      <span class="hljs-attr">body</span>: <span class="hljs-built_in">JSON</span>.stringify(requestBody),
    });

    <span class="hljs-keyword">if</span> (!response.ok) {
      <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(<span class="hljs-string">`API request failed with status <span class="hljs-subst">${response.status}</span>`</span>);
    }

    <span class="hljs-keyword">const</span> reader = response.body.getReader();
    <span class="hljs-keyword">let</span> result = <span class="hljs-string">""</span>;

    <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>) {
      <span class="hljs-keyword">const</span> { done, value } = <span class="hljs-keyword">await</span> reader.read();
      <span class="hljs-keyword">if</span> (done) <span class="hljs-keyword">break</span>;
      result += <span class="hljs-keyword">new</span> TextDecoder().decode(value); <span class="hljs-comment">// Decodes streamed chunks as they arrive</span>
    }

    <span class="hljs-keyword">return</span> NextResponse.json({ <span class="hljs-attr">text</span>: result });

  } <span class="hljs-keyword">catch</span> (error) {
    <span class="hljs-built_in">console</span>.error(<span class="hljs-string">"API Route Error:"</span>, error);
    <span class="hljs-keyword">return</span> NextResponse.json({ <span class="hljs-attr">error</span>: error.message }, { <span class="hljs-attr">status</span>: <span class="hljs-number">500</span> });
  }
}
</code></pre>
<h2 id="heading-running-the-chatbot">Running the chatbot</h2>
<p>Now that everything is set up, it's time to run the chatbot and test the interaction with the API. Follow these steps to start the application and begin chatting with the AI:</p>
<h3 id="heading-start-the-development-server">Start the Development Server</h3>
<p>Run <code>npm run dev</code> to start your local server.</p>
<h3 id="heading-interacting-with-the-chatbot">Interacting with the Chatbot</h3>
<p>Open <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a> in your browser, type a message in the input field, and press Submit.</p>
<p>The chatbot will:</p>
<ul>
<li><p>Send your input to the API.</p>
</li>
<li><p>Process the AI’s response and display it in the chat window.</p>
</li>
<li><p>Maintain a conversation history, ensuring relevant context in responses.</p>
</li>
</ul>
<p>Here is an example of how the chatbot interacts in a conversation:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741796076898/8a0232ef-f804-42c0-98a1-9294459a8212.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-whats-next">What’s next?</h2>
<p>That’s it! With this guide, you’ve successfully built a basic chatbot using Next.js and Lilypad’s Anura API, integrating real-time AI responses into your app. You’ve learned how to:</p>
<ul>
<li><p>Set up a Next.js project and configure Anura API keys.</p>
</li>
<li><p>Build a client-side chat interface that maintains conversation history.</p>
</li>
<li><p>Create a server-side API route to process user input and stream AI responses.</p>
</li>
</ul>
<p>This is just the beginning! You can extend this chatbot by:</p>
<ul>
<li><p>Switching models to explore different capabilities.</p>
</li>
<li><p>Adding UI enhancements, such as message streaming instead of displaying responses all at once.</p>
</li>
<li><p>Improving memory handling, such as storing chat history in a database for long-term context.</p>
</li>
</ul>
<p>For more details on available models and additional API features, check out the <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/inference-api"><strong>Lilypad API documentation</strong></a>. If you have any questions or ideas for improvements, feel free to reach out in the <a target="_blank" href="https://discord.com/invite/tnE8SMmsxW">Lilypad Discord</a>.</p>
<p>You can check out the source code for <a target="_blank" href="https://github.com/PBillingsby/lilypad-llama3-chatbot">this example here</a>!</p>
]]></content:encoded></item><item><title><![CDATA[What is the Lilypad Decentralized Compute Network?]]></title><description><![CDATA[Lilypad is a decentralized platform designed to democratize access to GPU and high-performance compute—resources essential for modern AI and ML tasks. We are striving to make it possible for anyone, anywhere to be able to access a network of compute ...]]></description><link>https://blog.lilypad.tech/what-is-the-lilypad-decentralized-compute-network</link><guid isPermaLink="true">https://blog.lilypad.tech/what-is-the-lilypad-decentralized-compute-network</guid><category><![CDATA[High Performance Computing ]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Lindsay Walker]]></dc:creator><pubDate>Wed, 19 Feb 2025 00:38:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1739920737515/8944a404-c4d3-4d38-8c13-68d68343ba2a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is a decentralized platform designed to democratize access to GPU and high-performance compute—resources essential for modern AI and ML tasks. We are striving to make it possible for anyone, anywhere, to access a network of compute resources, crowdsourced from a community of individual providers and datacenters, to run complex jobs that would otherwise be unaffordable or inaccessible due to a lack of powerful hardware.</p>
<p>Typically, smaller startups, academic researchers, and indeed any organization without the funding and resources to set up or purchase compute infrastructure are left out: demand for HPC often exceeds capacity by over 300% [1]. Lilypad will make it trivial not only to access these resources, but also to access them on-demand, with the flexibility and customizability to handle any task. This includes certain High Performance Compute (HPC) tasks. Lilypad will take on specific types of work, such as large-scale batch inference, that require considerable infrastructure but aren’t necessarily the best use of a typical HPC network’s resources. This type of task doesn’t require fast networking between machines, and can be outsourced to a decentralized system like Lilypad.</p>
<h3 id="heading-traditional-hpc-networks">Traditional HPC Networks</h3>
<p>High Performance Compute is typically a system where computer hardware is joined together in a network with high bandwidth (400Gbps transfer), low latency, high efficiency, and specialized hardware. Traditionally, high performance compute networks were a very specialized and coordinated set of hardware and configurations, localized to one or a few geographical locations. Ten years ago, HPC networks were made up of tens or hundreds of computers networked together. Today, a single piece of hardware costing less than $10,000 can deliver GPU compute power that once had to be sourced from an entire HPC network.</p>
<p>HPC Networks enable a level of computation that isn’t possible on single machines. Use cases include genomic research (tasks such as protein folding), weather forecasting, complex scientific and engineering simulations, and AI/ML Model training that require massive parallel processing over what can amount to petabytes of data.</p>
<h3 id="heading-the-lilypad-approach">The Lilypad Approach</h3>
<p>GPUs such as NVIDIA’s A100 and H100 or the AMD Instinct MI250 can handle high performance tasks like complex deep learning workloads. Lilypad empowers anyone with idle GPU capacity to participate in a global compute network, unlocking affordable high-performance resources for startups, researchers, and innovators, without locking users into the fees and restrictions of traditional cloud providers.</p>
<p>Where Lilypad differs from traditional HPC is in its approach: a decentralized, open network that accommodates a wide range of machine types as providers, coordinated by a platform that makes these compute- and data-intensive tasks possible. Using advances such as containerization and better distributed-systems tooling, it is now possible to coordinate a worldwide high performance compute network.</p>
<h1 id="heading-the-lilypad-network">The Lilypad Network</h1>
<p>One of the unique things about the Lilypad network is that it is a decentralized and open network. Any compute provider, or node, at or above a defined performance threshold can join and be paid (in our native LILY currency) for running jobs sent to the network.</p>
<p>For our MVP, we are prioritizing computers with enough GPU (as well as CPU and memory) that can run one-shot inference jobs, AI agents that use chain-of-thought or sequential prompting, and customized high demand open source models.</p>
<h2 id="heading-decentralizing-certain-hpc-tasks">Decentralizing Certain HPC Tasks</h2>
<p>The Lilypad Network is uniquely suited to certain types of HPC tasks. Because it is a decentralized network, there are certain limitations in terms of latency. In other words, you shouldn’t expect the millisecond responses you get with AI chatbots, but that doesn’t mean there aren’t hundreds of applications that Lilypad is well suited for. Just a few examples of what our network can be used for:</p>
<ul>
<li><p>Providing compute resources for pharmaceutical and biological research by academics who are new to GPU processing and need access to the newest models</p>
</li>
<li><p>Batch data processing for financial analysis, and large scale data processing</p>
</li>
<li><p>RAG-driven model customization - create customized and secure modules that can be connected to business’ private documentation datasets to generate business documents</p>
</li>
<li><p>Processing of edge and IoT device data</p>
</li>
<li><p>Publishing fine-tuned models for specific applications that can be quickly run via an API endpoint</p>
</li>
</ul>
<h3 id="heading-enabling-small-project-growth">Enabling Small Project Growth</h3>
<p>Many new initiatives, whether emerging startups, academic research groups, or innovative projects, often face a choice: invest significant capital in building compute infrastructure, or pay premium rates to a cloud provider. Cloud providers enable quick and easy setup of needed resources, and scalability for when a project’s scope and computing needs grow. The drawback is the significant cost of these remote options, as well as vendor lock-in (the costs and difficulties designed into cloud systems to prevent users from switching to other services). By switching away from large, incumbent cloud providers, projects and companies can save tens to hundreds of thousands of dollars [2] in operating costs, depending on the size of their computing needs.</p>
<p>Because of the way that we are designing our Module Marketplace, users who have custom workflows and jobs can package their job as a ‘module’ that can be run on Lilypad. This means even censored AI models, models unavailable elsewhere, or models that have been customized with fine-tuning (or RAG workflows with custom datasets) can easily be accessed, on-demand, from an API endpoint, in a serverless manner. Traditionally, jobs like this are only an option if you take on the cost and time of paying for and configuring your own hardware.</p>
<h3 id="heading-meeting-specific-needs-of-the-ai-era">Meeting Specific Needs of the AI Era</h3>
<p>The Lilypad network enables any user who needs certain high performance compute tasks to access them at a much more competitive rate than major cloud providers offer. We also remove time commitments and abstract away the overhead of setting up infrastructure, both real costs for any organization accessing compute from a GPU rental marketplace [2]. Our platform delivers serverless, on-demand AI compute and model hosting, enabling rapid deployment and experimentation without the burdens of traditional infrastructure. Anyone can add, test, and run their model via an endpoint, so you aren’t restricted to a limited set of models or compute jobs in the way other similar services are, and you pay on-demand, for only the compute you use, instead of monthly for access to services.</p>
<p>There are certain latency limitations with Lilypad, and the network isn’t able to specify the exact hardware in the same way a single-owner HPC network can. It does, however, provide distributed container orchestration, job scheduling, and management with <a target="_blank" href="https://docs.bacalhau.org/">Bacalhau</a>, and has the ability to support massive parallel processing in Docker containers across a peer-to-peer network of hardware. This is why Lilypad has started by targeting a few distinct use cases such as custom AI Inference tasks, and is exploring other uses such as RAG, Fine Tuning, and supporting Agentic AI workflows.</p>
<h3 id="heading-the-lilypad-value">The Lilypad Value</h3>
<p>The value of what Lilypad is creating lies not only in customizable, on-demand access to compute, but also in a competitive marketplace that produces gains for parties on both ends. Resource providers who would otherwise be unable to monetize idle compute power have the opportunity to make the most of their infrastructure investment, and those running jobs benefit from a marketplace where they pick from pre-configured jobs and can bid on lower-priced resources. Our open market means job creators will benefit from competitive pricing, as compute providers compete to win jobs to monetize their idle compute. Moving away from traditional cloud compute providers means end users can get much closer to on-prem costs, with savings of 25%-66% [2], depending on the scale of a project.</p>
<p>Plus, with the Lilypad Module Marketplace, users aren’t locked into using only certain models. Module Creators can easily containerize, configure, test, and release any model they need on the Lilypad network, then access the job on-demand with an API endpoint. Other API-endpoint services give users and builders a limited selection, which inevitably forces a choice between building something customized to your needs, or setting up or renting your own infrastructure.</p>
<p>We are also fostering the growth of a massive library of useful models and compute jobs on our network by adding an incentive to our protocol that enables module creators to earn a small piece of the fee each time their module is run. An incentivized open marketplace will naturally allow these creators in our ecosystem to build at the speed of open source.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Lilypad provides benefits for everyone in our network: monetization of idle power for those who contribute compute, and cost savings for end users, while maintaining the advantages of scalability and ease of use provided by cloud computing companies, as well as the ability for those who bring models to our marketplace to earn for their contribution.</p>
<p>Lilypad’s mission is to rapidly and affordably bring high-performance AI and ML capabilities to innovators worldwide, leveraging the power of decentralized compute. We are harnessing the power of open communities and crypto-incentivization to create network-effect growth, while meeting the constantly evolving AI and high performance compute landscape.</p>
<hr />
<h1 id="heading-references">References</h1>
<ol>
<li><p>AWS Intel. <em>Challenging the Barriers to High Performance Computing in the Cloud.</em> October 2019. <a target="_blank" href="https://d1.awsstatic.com/HPC2019/Challenging-Barriers-to-HPC-in-the-cloud-Oct2019.pdf">https://d1.awsstatic.com/HPC2019/Challenging-Barriers-to-HPC-in-the-cloud-Oct2019.pdf</a></p>
</li>
<li><p>Justin Garrison. <em>The New Stack. Cloud vs. On-Prem: Comparing Long-Term Costs</em>. November 2024. <a target="_blank" href="https://thenewstack.io/cloud-vs-on-prem-comparing-long-term-costs/">https://thenewstack.io/cloud-vs-on-prem-comparing-long-term-costs/</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[DeepSeek-R1 7b is Now a Module on the Lilypad Network!]]></title><description><![CDATA[Big news for developers and researchers! DeepSeek-R1 7b and DeepSeek-R1 1.5b, powerful open source reasoning models, are now running as modules on the Lilypad Network! By using these modules in AI workflows, developers can be sure their data is not b...]]></description><link>https://blog.lilypad.tech/deepseek-r1-7b-is-now-a-module-on-the-lilypad-network</link><guid isPermaLink="true">https://blog.lilypad.tech/deepseek-r1-7b-is-now-a-module-on-the-lilypad-network</guid><category><![CDATA[Deepseek]]></category><category><![CDATA[AI]]></category><category><![CDATA[aitools]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[modules]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Sam Ceja]]></dc:creator><pubDate>Wed, 12 Feb 2025 00:18:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738283252604/f1677b89-c731-4236-aaa7-696e521b1355.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Big news for developers and researchers! <a target="_blank" href="https://github.com/rhochmayr/ollama-deepseek-r1-7b">DeepSeek-R1 7b</a> and <a target="_blank" href="https://github.com/narbs91/lilypad-ollama-deepseek-r1-1-5b">DeepSeek-R1 1.5b</a>, powerful open source reasoning models, are now running as modules on the Lilypad Network! By using these modules in AI workflows, developers can be sure their data is not being collected by a large centralized entity while using decentralized infrastructure to run the high performance DeepSeek R1 modules.</p>
<p>DeepSeek R1 on Lilypad is more than just a technical milestone—it’s a step toward making advanced, open source AI accessible to everyone, powered by a decentralized network.</p>
<p>Whether you’re building agent workflows, generating complex insights, or building AI-driven applications, DeepSeek R1 on Lilypad opens up a world of possibilities!</p>
<h3 id="heading-why-choose-lilypad-for-deepseek-r1"><strong>Why Choose Lilypad for DeepSeek-R1?</strong></h3>
<p>By running DeepSeek-R1 on Lilypad, you’re leveraging the power of decentralized compute for faster, more scalable AI workflows. Developers can scale to running large numbers of inference jobs at once, all while reducing reliance on costly centralized servers.</p>
<ul>
<li><p><strong>Frictionless AI Access:</strong> From installation to execution, Lilypad makes it simple to deploy and manage AI models like DeepSeek R1.</p>
</li>
<li><p><strong>Cost-Effective Compute:</strong> Lilypad's decentralized network saves you money while delivering the speed and reliability you need.</p>
</li>
<li><p><strong>Developer-First Design:</strong> Whether you’re a seasoned AI engineer or just getting started, Lilypad’s tools and APIs are built to empower your workflow.</p>
</li>
</ul>
<h2 id="heading-get-started"><strong>Get Started</strong></h2>
<p><strong>Using the Lilypad CLI</strong></p>
<p>Install the Lilypad <a target="_blank" href="https://docs.lilypad.tech/lilypad/lilypad-testnet/install-run-requirements">CLI</a> (directions below):</p>
<pre><code class="lang-plaintext"># Detect your machine's architecture and set it as $OSARCH
OSARCH=$(uname -m | awk '{if ($0 ~ /arm64|aarch64/) print "arm64"; else if ($0 ~ /x86_64|amd64/) print "amd64"; else print "unsupported_arch"}') &amp;&amp; export OSARCH;
# Detect your operating system and set it as $OSNAME
OSNAME=$(uname -s | awk '{if ($1 == "Darwin") print "darwin"; else if ($1 == "Linux") print "linux"; else print "unsupported_os"}') &amp;&amp; export OSNAME;
# Download the latest production build
curl https://api.github.com/repos/lilypad-tech/lilypad/releases/latest | grep "browser_download_url.*lilypad-$OSNAME-$OSARCH-cpu" | cut -d : -f 2,3 | tr -d \" | wget -i - -O lilypad

# Make Lilypad executable and install it
chmod +x lilypad
sudo mv lilypad /usr/local/bin/lilypad
</code></pre>
<p>Run the DeepSeek R1 Module (<em>using the Lilypad DemoNet testing environment</em>):</p>
<pre><code class="lang-plaintext">lilypad run --network demonet --web3-private-key b3994e7660abe5f65f729bb64163c6cd6b7d0b1a8c67881a7346e3e8c7f026f5 github.com/rhochmayr/ollama-deepseek-r1-7b:1.0.0 -i Prompt="What is the future of open-source AI"
</code></pre>
<p><strong>Developer tooling for running inference jobs on Lilypad</strong></p>
<ul>
<li><p><strong>Build an Agent or App using the Local CLI Wrapper for Inference:</strong> Both DeepSeek modules can run with the <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/js-cli-wrapper-local">JS-based CLI Wrapper</a>, offering a streamlined way to integrate modules into your workflows. Although it involves more technical overhead, the local CLI wrapper gives developers the option to run their own “Job Creator” API, reducing reliance on a hosted API from Lilypad and bringing further decentralization to the AI workflow. See an example for building a front end on top of the JS CLI Wrapper <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/running-lilypad-in-a-front-end">here</a>!</p>
</li>
<li><p><strong>Hosted API (Beta - coming soon!):</strong> For those who prefer a plug-and-play solution, our API is in beta and will be available for the community to use soon—run models effortlessly without additional setup.</p>
</li>
</ul>
<h2 id="heading-conclusion"><strong>Conclusion:</strong></h2>
<p>DeepSeek R1 on Lilypad is your gateway to running open source models with decentralized AI compute. Start building today and experience the freedom, scalability, and power of running AI on Lilypad.</p>
<h2 id="heading-join-the-lilypad-community"><strong>Join the Lilypad Community</strong></h2>
<p>We’re committed to building a community of developers and innovators who want to push the boundaries of AI and decentralized compute. Join our <a target="_blank" href="https://discord.gg/tnE8SMmsxW">Discord community</a> to connect, share, and grow with others.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad x Gateway]]></title><description><![CDATA[Lilypad is thrilled to announce a landmark partnership with Gateway, the team behind the Shared Private State, to build the private, encrypted foundation of decentralized AI.
This isn’t just a bridge between networks — it’s an encrypted tunnel to the...]]></description><link>https://blog.lilypad.tech/lilypad-x-gateway</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-x-gateway</guid><category><![CDATA[AI]]></category><category><![CDATA[privacy]]></category><category><![CDATA[fhe]]></category><dc:creator><![CDATA[Alison Haire]]></dc:creator><pubDate>Sat, 08 Feb 2025 13:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1746782743543/b16fe21c-bc7b-441d-b96f-0168274cc366.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Lilypad is thrilled to announce a landmark partnership with Gateway, the team behind the Shared Private State, to build the private, encrypted foundation of decentralized AI.</p>
<p>This isn’t just a bridge between networks — it’s an encrypted tunnel to the future. Together, we unlock programmable, verifiable, privacy-preserving AI compute at scale.</p>
<hr />
<h2 id="heading-details-of-partnership-synergistic-strengths">🌟 Details of Partnership - Synergistic Strengths</h2>
<p>Lilypad provides:</p>
<ul>
<li><p>A decentralized GPU marketplace for running inference, model training, and agentic workflows</p>
</li>
<li><p>Verifiable, modular AI-first compute layer with blockchain-coordinated payment rails and provenance</p>
</li>
<li><p>Composable job architecture to plug into any system — agents, dApps, research pipelines, L2s</p>
</li>
</ul>
<p>Gateway delivers:</p>
<ul>
<li><p>Encrypted compute powered by their MPC-based Shared Private State</p>
</li>
<li><p>Programmable cryptography for developers through SDKs in Go, JavaScript, and Rust</p>
</li>
<li><p>A full Privacy Enhancing Technologies (PETs) Marketplace including garbled circuits, proxy re-encryption, TEEs, and homomorphic encryption</p>
</li>
</ul>
<p>Combined, we form a stack that makes confidential decentralized AI usable, scalable, and programmable.</p>
<hr />
<h2 id="heading-practical-synergies-short-term">🔧 Practical Synergies (Short-Term)</h2>
<p>What’s possible today:</p>
<ul>
<li><p>AI models deployed on Lilypad can now run encrypted through Gateway’s Shared Private State</p>
</li>
<li><p>PETs like proxy re-encryption and garbled circuits now plug into live GPU compute workflows</p>
</li>
<li><p>Devs can integrate Gateway SDKs with Lilypad job endpoints to spin up fully encrypted training and inference pipelines</p>
</li>
</ul>
<p>Use cases already underway:</p>
<ul>
<li><p>Federated healthcare AI where patient data never leaves encrypted state — powered by Gateway’s encryption and Lilypad’s GPUs</p>
</li>
<li><p>Confidential financial model training — fraud detection and risk scoring across encrypted, multi-institution data</p>
</li>
<li><p>Private enterprise AI — companies retain full IP privacy while accessing Lilypad’s distributed compute grid</p>
</li>
</ul>
<hr />
<h2 id="heading-roadmap-collaboration-mid-term">🚀 Roadmap Collaboration (Mid-Term)</h2>
<p>Over the next few months:</p>
<ul>
<li><p>We’ll co-launch a POC showing encrypted LLM inference on Lilypad using Gateway’s Garbled Circuit SDK</p>
</li>
<li><p>Gateway’s PET marketplace will be integrated into Lilypad’s developer documentation and workflow guides</p>
</li>
<li><p>Builders will get co-branded examples, tutorials, and grant support for privacy-preserving AI applications</p>
</li>
</ul>
<p>Future milestones:</p>
<ul>
<li><p>Composable privacy primitives layered directly into Lilypad’s job architecture</p>
</li>
<li><p>zk-like encrypted compute receipts for verifiability + confidentiality</p>
</li>
<li><p>Gateway PET modules powering AI agents, simulations, and multi-party compute coordination</p>
</li>
</ul>
<p>Together, we’re building the secure compute core of the decentralized AI stack.</p>
<hr />
<h2 id="heading-strategic-value-why-it-matters">📈 Strategic Value - Why It Matters</h2>
<p>This is a strategic unlock for DeAI:</p>
<ul>
<li><p>Encryption + Compute = Confidential Coordination</p>
</li>
<li><p>Gateway enables Lilypad jobs to run on sensitive data without leaking inputs, outputs, or model logic</p>
</li>
<li><p>From healthcare to finance to agentic infrastructure, privacy is no longer a blocker — it’s now programmable</p>
</li>
</ul>
<p>Benefits to the Web3 and OSS ecosystem:</p>
<ul>
<li><p>PETs + Compute workflows that devs can integrate without complex infra</p>
</li>
<li><p>Launchpad for a new generation of DePIN x AI applications with built-in privacy</p>
</li>
<li><p>Elevates Lilypad’s position from just verifiable compute to verifiable + confidential compute</p>
</li>
</ul>
<p>For developers:</p>
<ul>
<li><p>One-click access to encrypted job execution</p>
</li>
<li><p>Modular SDKs in the language of your choice</p>
</li>
<li><p>Ability to deploy next-gen AI agents without exposing model logic or user data</p>
</li>
</ul>
<hr />
<h2 id="heading-long-term-vision-infrastructure-for-encrypted-autonomous-intelligence">🌍 Long-Term Vision – Infrastructure for Encrypted Autonomous Intelligence</h2>
<p>Lilypad and Gateway are aligned in the belief that decentralized AI must be both scalable and secure.</p>
<p>This partnership advances:</p>
<ul>
<li><p>A Layer 0 for DeAI where encrypted state, compute liquidity, and model logic meet</p>
</li>
<li><p>A framework for encrypted model marketplaces, agent networks, and shared inference layers</p>
</li>
<li><p>A decentralized stack for programmable privacy, economic coordination, and agent autonomy</p>
</li>
</ul>
<p>This sets a new precedent for ecosystem-aligned infrastructure: not just decentralized — encrypted, composable, and unstoppable.</p>
<hr />
<h2 id="heading-whats-next">📅 What’s Next</h2>
<p>Immediate:</p>
<ul>
<li><p>Teams are co-building a proof-of-concept using Garbled Circuits + Lilypad compute</p>
</li>
<li><p>Gateway SDKs now available for encrypted data vault and PET development</p>
</li>
<li><p>Shared roadmap, dev documentation and example jobs publishing soon</p>
</li>
</ul>
<p>Medium-term:</p>
<ul>
<li><p>PET Marketplace + Lilypad compute integrated via SDK and UI</p>
</li>
<li><p>Partner developer campaigns to co-fund new private DeAI primitives</p>
</li>
<li><p>Joint GTM for secure agent frameworks and confidential compute ecosystems</p>
</li>
</ul>
<hr />
<p>🔍 Explore Gateway: <a target="_blank" href="https://gateway.tech">https://gateway.tech</a><br />🌐 Launch encrypted compute: <a target="_blank" href="https://lilypad.tech">https://lilypad.tech</a></p>
]]></content:encoded></item><item><title><![CDATA[Lilypad Module Builder Guide]]></title><description><![CDATA[Running AI workloads can be resource-intensive, often requiring expensive GPUs or centralized cloud infrastructure. But what if you could deploy AI models on a decentralized network and run inference jobs without owning high-end hardware?
This is whe...]]></description><link>https://blog.lilypad.tech/lilypad-module-builder-guide</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-module-builder-guide</guid><category><![CDATA[AI]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[research]]></category><category><![CDATA[guide]]></category><category><![CDATA[modules]]></category><category><![CDATA[code]]></category><category><![CDATA[Cryptocurrency]]></category><category><![CDATA[compute]]></category><dc:creator><![CDATA[Phil Billingsby]]></dc:creator><pubDate>Thu, 30 Jan 2025 22:19:15 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738271854599/3666cafc-9eae-47bf-b8e3-7fcc8257a527.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Running AI workloads can be resource-intensive, often requiring expensive GPUs or centralized cloud infrastructure. But what if you could deploy AI models on a decentralized network and run inference jobs without owning high-end hardware?</p>
<p>This is where <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/lilypad-modules"><strong>Lilypad Modules</strong></a> come in.</p>
<p>A Lilypad Module is a self-contained, task-specific computational unit designed specifically to run on the Lilypad network. It encapsulates everything needed to perform a specific job, such as input handling, model execution, workflow orchestration, and output generation. AI models are the backbone of most Lilypad Modules, enabling advanced computations such as natural language processing, image generation, and video synthesis. Lilypad modules leverage these models to deliver results efficiently and at scale.</p>
<p>Modules are structured as <strong>Git repositories</strong>, which makes them easy to manage, share and version. This structure ensures that developers can organize their modules in a way that adheres to Lilypad’s standards while remaining flexible for customization.</p>
<p>So <strong>why</strong> are Lilypad Modules important? They allow users, researchers, developers and the like to add specific functionalities to the Lilypad Network, enabling access to decentralized compute resources that are capable of running AI inference jobs without requiring a high-performance GPU. Each module, tailored for specific tasks, serves as a unique building block to address the diverse needs across AI and computational workloads.</p>
<h2 id="heading-core-characteristics-of-a-lilypad-module">Core characteristics of a Lilypad module</h2>
<p>Each Lilypad module orchestrates AI workflows or computations with standardized input, processing, and output pipelines. By encapsulating all dependencies, modules enable isolated, reproducible computations, making them ideal for scaling workloads, sharing within the community or even collaborating on module development.</p>
<p>Lilypad modules are built around several essential components that work together to ensure functionality and versatility:</p>
<h3 id="heading-1-core-workflow-components"><strong>1. Core Workflow Components</strong></h3>
<p>These elements define how a module processes jobs and delivers results:</p>
<ul>
<li><p><strong>Input Handling</strong>: Manages data intake and preparation, adapting inputs to meet a module’s requirements.</p>
</li>
<li><p><strong>Task Logic</strong>: Encapsulates the computational process, including AI model execution, determining how jobs are processed.</p>
</li>
<li><p><strong>Output Generation</strong>: Formats and delivers results in a way that is actionable and accessible for users.</p>
</li>
</ul>
<h3 id="heading-2-operational-infrastructure"><strong>2. Operational Infrastructure</strong></h3>
<p>Modules require robust infrastructure to run efficiently and reliably:</p>
<ul>
<li><p><strong>Workflow Orchestration</strong>: Coordinates data flow and computations within a module.</p>
</li>
<li><p><strong>Dependency Management</strong>: Specifies necessary libraries, frameworks, and runtime environments (e.g., <code>requirements.txt</code> or Docker).</p>
</li>
<li><p><strong>Module Configuration</strong>: Customizes behavior through settings defined in files like <code>lilypad_module.json.tmpl</code>.</p>
</li>
</ul>
<h3 id="heading-3-reliability-and-scalability"><strong>3. Reliability and Scalability</strong></h3>
<p>To handle diverse environments and workloads, modules incorporate:</p>
<ul>
<li><p><strong>Logging and Monitoring</strong>: Tracks performance and errors, aiding debugging and optimization.</p>
</li>
<li><p><strong>Error Handling</strong>: Safeguards against invalid inputs or unexpected failures with mechanisms like try-catch blocks.</p>
</li>
<li><p><strong>Scalability</strong>: Ensures a module runs efficiently across various environments, including GPUs, CPUs or decentralized nodes.</p>
</li>
</ul>
<p><strong>The most common files found in a Lilypad module include:</strong></p>
<ul>
<li><p><code>Dockerfile</code>: Defines the containerized environment for a module, specifying the runtime dependencies, such as system packages, libraries, and configurations needed to run a module on nodes on the Lilypad network.</p>
</li>
<li><p><code>requirements.txt</code>: Lists the Python dependencies required by a module, such as AI frameworks (e.g., TensorFlow, PyTorch) and utility libraries.</p>
</li>
<li><p><code>lilypad_module.json.tmpl</code>: This template file defines a module's metadata, including its name, description, required inputs, outputs and other configurations.</p>
</li>
<li><p>Inference script: The core script (e.g., <code>run_inference.py</code>) that handles the execution of the AI model or task, including processing inputs, running computations and generating outputs. This is the entry point file for running modules.</p>
</li>
<li><p>Model directory: Storing the model in a <code>model</code> directory bundles it within the container, so it is readily accessible at runtime without requiring a separate download step.</p>
</li>
</ul>
<h2 id="heading-downloading-the-model">Downloading the model</h2>
<p>Downloading a model is an essential step to ensure it can be referenced and utilized offline at runtime. In general, this process involves retrieving both the model and its tokenizer or configuration files from a model hub or repository. The exact script used to download a model will vary depending on the type of model and the library it belongs to, such as Hugging Face Transformers or TensorFlow.</p>
<p>In <code>transformers</code>, different model classes are designed for specific tasks. <code>AutoModelForSeq2SeqLM</code> is used for <strong>sequence-to-sequence (Seq2Seq) models</strong>, which have both an encoder and a decoder, making them suitable for tasks like translation, summarization, and text-to-text generation (e.g., T5, BART). On the other hand, <strong>causal language models (CLMs)</strong>, like Falcon and GPT, use <code>AutoModelForCausalLM</code> and generate text in an <strong>autoregressive</strong> manner, predicting one token at a time without an encoder. Choosing the right class ensures the model functions correctly within your AI module. The structure of the script and the methods for downloading and saving the model can differ between models and libraries, so it’s important to tailor the process to the specific requirements of the model you're working with.</p>
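<p>The class choice can be summarized as a simple task-to-class mapping. The helper below is purely illustrative (the class names are real <code>transformers</code> classes, but the mapping convention is our own):</p>

```python
# Hypothetical helper summarizing which transformers Auto class fits which task.
TASK_TO_AUTO_CLASS = {
    "translation": "AutoModelForSeq2SeqLM",     # encoder-decoder (T5, BART)
    "summarization": "AutoModelForSeq2SeqLM",
    "text-generation": "AutoModelForCausalLM",  # decoder-only (Falcon, GPT)
}


def auto_class_for(task):
    """Return the transformers Auto class name suited to `task`."""
    if task not in TASK_TO_AUTO_CLASS:
        raise ValueError(f"no mapping for task {task!r}")
    return TASK_TO_AUTO_CLASS[task]
```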
<p>In the example below, the model and tokenizer are fetched using a library-specific method, such as <code>from_pretrained()</code> provided by the <code>AutoTokenizer</code> utility, which downloads the necessary files and configurations to a local directory. This directory (<code>/model</code>) serves as the runtime reference for the model during execution. Once downloaded, the model and tokenizer are saved locally using <code>save_pretrained()</code>, guaranteeing that they are available for repeated use without needing to redownload them.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, AutoModelForCausalLM

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">download_model</span>():</span>
    model_name = <span class="hljs-string">"tiiuae/falcon-7b-instruct"</span>

    <span class="hljs-comment"># Load tokenizer and model</span>
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    <span class="hljs-comment"># Save the tokenizer and model</span>
    tokenizer.save_pretrained(<span class="hljs-string">'./model'</span>)
    model.save_pretrained(<span class="hljs-string">'./model'</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    download_model()
</code></pre>
<p>Save the script as <code>download_module.py</code> (the same file the Dockerfile copies later) and run it to begin the download. (Note: this may take a few minutes depending on the size of the model):</p>
<p><code>python download_module.py</code></p>
<h2 id="heading-dependencies">Dependencies</h2>
<p>The <code>requirements.txt</code> file defines the Python dependencies needed for a module to run. It ensures that all necessary libraries, frameworks, and tools are installed in a module’s runtime environment, enabling consistent execution across different nodes in the Lilypad network.</p>
<p>For example, in the <strong>Falcon-7B module</strong> we explore below, the following dependencies might be included in <code>requirements.txt</code>:</p>
<pre><code class="lang-plaintext">transformers==4.36.0
torch==2.1.0
numpy&lt;2.0.0
accelerate==0.25.0
bitsandbytes&gt;=0.41.1
</code></pre>
<p>These libraries support essential tasks such as loading models (<code>transformers</code>), running computations with hardware acceleration (<code>torch</code>) and defining configurations for generation workflows. However, the contents of <code>requirements.txt</code> will vary based on the specific requirements of your module. For instance:</p>
<ul>
<li><p>A module handling <strong>natural language processing (NLP)</strong> might include <code>transformers</code> for pre-trained models and <code>sentencepiece</code> for tokenization.</p>
</li>
<li><p>A module designed for <strong>image processing</strong> could require <code>torchvision</code> or <code>Pillow</code>.</p>
</li>
<li><p>A module for <strong>custom AI workflows</strong> might include <code>scipy</code>, <code>numpy</code>, or other task-specific libraries.</p>
</li>
</ul>
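<p>Before baking dependencies into an image, it can help to confirm the local environment actually matches the pins. The check below is a hypothetical convenience, not part of any Lilypad tooling; it only inspects simple <code>==</code> pins:</p>

```python
# Sketch: compare "name==version" pins from requirements.txt against what is
# installed, returning (package, pinned, installed) tuples for mismatches.
from importlib import metadata


def check_pins(lines):
    """Return mismatched '==' pins; non-pinned or comment lines are skipped."""
    mismatches = []
    for line in lines:
        line = line.strip()
        if line.startswith("#") or "==" not in line:
            continue  # skip comments and range specifiers like ">=0.41.1"
        name, pinned = line.split("==", 1)
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None  # not installed at all
        if installed != pinned:
            mismatches.append((name, pinned, installed))
    return mismatches
```

Running it against the file's lines before <code>docker build</code> catches the "works on my machine" drift early.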
<h2 id="heading-creating-the-dockerfile">Creating the Dockerfile</h2>
<p>The <code>Dockerfile</code> is a fundamental component of a Lilypad module, defining the containerized environment in which the module runs. It specifies everything needed to build and execute the module, from the base operating system to the libraries, runtime dependencies, and execution commands. This allows the module to operate consistently across diverse environments, including nodes on the Lilypad network.</p>
<p>By containerizing a module, the <code>Dockerfile</code> encapsulates all dependencies and configurations, eliminating compatibility issues that might arise from differences in operating systems, installed packages, or hardware. It also simplifies deployment, as nodes can pull and run the prebuilt container without additional setup.</p>
<p>Different modules may require different configurations; here is the Dockerfile used for the Falcon 7B module:</p>
<pre><code class="lang-docker"><span class="hljs-comment"># Specify architecture</span>
<span class="hljs-keyword">FROM</span> --platform=linux/amd64 python:<span class="hljs-number">3.9</span>-slim as builder

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Install build dependencies</span>
<span class="hljs-keyword">RUN</span><span class="bash"> apt-get update &amp;&amp; \
    apt-get install -y --no-install-recommends \
    build-essential \
    &amp;&amp; rm -rf /var/lib/apt/lists/*</span>

<span class="hljs-comment"># Copy and install requirements</span>
<span class="hljs-keyword">COPY</span><span class="bash"> requirements.txt .</span>
<span class="hljs-keyword">RUN</span><span class="bash"> pip install --no-cache-dir -r requirements.txt</span>

<span class="hljs-comment"># Final stage: start from a clean base image</span>
<span class="hljs-keyword">FROM</span> --platform=linux/amd64 python:<span class="hljs-number">3.9</span>-slim

<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy installed packages from builder</span>
<span class="hljs-keyword">COPY</span><span class="bash"> --from=builder /usr/<span class="hljs-built_in">local</span>/lib/python3.9/site-packages /usr/<span class="hljs-built_in">local</span>/lib/python3.9/site-packages</span>

<span class="hljs-comment"># Create outputs directory</span>
<span class="hljs-keyword">RUN</span><span class="bash"> mkdir -p /outputs</span>
<span class="hljs-keyword">RUN</span><span class="bash"> chmod 777 /outputs</span>

<span class="hljs-comment"># Copy the inference script</span>
<span class="hljs-keyword">COPY</span><span class="bash"> run_inference.py .</span>

<span class="hljs-comment"># Copy the download script</span>
<span class="hljs-keyword">COPY</span><span class="bash"> download_module.py .</span>

<span class="hljs-keyword">RUN</span><span class="bash"> python3 download_module.py</span>

<span class="hljs-comment"># Set outputs directory as a volume</span>
<span class="hljs-keyword">VOLUME</span><span class="bash"> /outputs</span>

<span class="hljs-comment"># Run the inference script</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"python"</span>, <span class="hljs-string">"run_inference.py"</span>]</span>
</code></pre>
<h2 id="heading-building-your-module">Building your module</h2>
<p>The inference script is the entry point for a Lilypad module, handling inputs, executing computations, and producing outputs. Designed for diverse environments, particularly Lilypad's infrastructure, it relies on dynamic configurations, such as environment variables for user data or model directories, to remain flexible and avoid hardcoded parameters.</p>
<p>While this guide demonstrates one approach to structuring a text-to-text inference script, the implementation will vary depending on your model and its requirements. Some models, like those using Hugging Face, support standard libraries, while others, such as the SDXL module, require custom initialization. For more examples and approaches, visit our <a target="_blank" href="https://github.com/Lilypad-Tech/awesome-Lilypad?tab=readme-ov-file#modules">module examples</a>.</p>
<p>One important thing to note is that Lilypad modules operate within a controlled execution environment where <strong>network access is completely restricted</strong>. This design ensures security, reproducibility and prevents unintended data leaks or external dependencies. Since modules cannot fetch external resources or communicate over the internet, any required data must be provided as inputs at runtime.</p>
<p>The script’s core responsibility is to encapsulate task logic in reusable functions, processing inputs and generating outputs for tasks such as paraphrasing or image generation. This modularity allows it to adapt to various models and ensures compatibility with decentralized execution. Output must be written in JSON format.</p>
<p>Let’s examine the <strong><em>Falcon-7B</em></strong> module, which takes a text input, generates a response using a locally stored model, and saves the results in JSON format:</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoTokenizer, AutoModelForCausalLM, GenerationConfig
<span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> json

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">clean_response</span>(<span class="hljs-params">text</span>):</span>
    <span class="hljs-comment"># Find the start of Assistant's response</span>
    assistant_start = text.find(<span class="hljs-string">"Assistant: "</span>)
    <span class="hljs-keyword">if</span> assistant_start != <span class="hljs-number">-1</span>:
        <span class="hljs-comment"># Get everything after "Assistant: "</span>
        response = text[assistant_start + len(<span class="hljs-string">"Assistant: "</span>):]
        <span class="hljs-comment"># Find where the next "User: " starts (if it exists)</span>
        user_start = response.find(<span class="hljs-string">"\nUser"</span>)
        <span class="hljs-keyword">if</span> user_start != <span class="hljs-number">-1</span>:
            <span class="hljs-comment"># Only take the text up to the next "User: "</span>
            response = response[:user_start]
        <span class="hljs-keyword">return</span> response.strip()
    <span class="hljs-keyword">return</span> text.strip()

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    <span class="hljs-comment"># Create outputs directory if it doesn't exist</span>
    os.makedirs(<span class="hljs-string">"outputs"</span>, exist_ok=<span class="hljs-literal">True</span>)

    <span class="hljs-comment"># Get input from environment variable, or use default if not provided</span>
    input_text = os.getenv(<span class="hljs-string">"MODEL_INPUT"</span>, <span class="hljs-string">"Tell me a story about a giraffe."</span>)

    local_path = <span class="hljs-string">"./model"</span>
    print(<span class="hljs-string">f"Loading model from <span class="hljs-subst">{local_path}</span>..."</span>)

    <span class="hljs-comment"># Load tokenizer and model from local path</span>
    tokenizer = AutoTokenizer.from_pretrained(local_path, local_files_only=<span class="hljs-literal">True</span>)
    <span class="hljs-comment"># Set pad token to eos token if not set</span>
    <span class="hljs-keyword">if</span> tokenizer.pad_token <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
        tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        local_path,
        torch_dtype=torch.bfloat16,
        device_map=<span class="hljs-string">"auto"</span>,  <span class="hljs-comment"># This will use GPU if available, otherwise CPU</span>
        local_files_only=<span class="hljs-literal">True</span>,
        pad_token_id=tokenizer.pad_token_id
    )

    <span class="hljs-comment"># Get the device that the model is on</span>
    device = model.device

    <span class="hljs-comment"># Set up generation config</span>
    generation_config = GenerationConfig(
        max_new_tokens=<span class="hljs-number">256</span>,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
        do_sample=<span class="hljs-literal">True</span>,
        temperature=<span class="hljs-number">0.7</span>,
        top_p=<span class="hljs-number">0.9</span>
    )
    model.generation_config = generation_config

    <span class="hljs-comment"># We use the tokenizer's chat template to format each message</span>
    messages = [
        {<span class="hljs-string">"role"</span>: <span class="hljs-string">"user"</span>, <span class="hljs-string">"content"</span>: input_text},  <span class="hljs-comment"># Use the environment variable input</span>
    ]

    input_text = tokenizer.apply_chat_template(messages, tokenize=<span class="hljs-literal">False</span>, add_generation_prompt=<span class="hljs-literal">True</span>)
    <span class="hljs-comment"># Include attention mask in tokenization</span>
    inputs = tokenizer(
        input_text, 
        return_tensors=<span class="hljs-string">"pt"</span>,
        padding=<span class="hljs-literal">True</span>,
        truncation=<span class="hljs-literal">True</span>,
        max_length=<span class="hljs-number">2048</span>,
        return_attention_mask=<span class="hljs-literal">True</span>
    )

    <span class="hljs-comment"># Move input tensors to the same device as the model</span>
    inputs = {k: v.to(device) <span class="hljs-keyword">for</span> k, v <span class="hljs-keyword">in</span> inputs.items()}

    outputs = model.generate(
        input_ids=inputs[<span class="hljs-string">"input_ids"</span>],
        attention_mask=inputs[<span class="hljs-string">"attention_mask"</span>]
    )
    generated_text = tokenizer.decode(outputs[<span class="hljs-number">0</span>], skip_special_tokens=<span class="hljs-literal">True</span>)

    <span class="hljs-comment"># Clean up the response to get just the assistant's part</span>
    clean_output = clean_response(generated_text)

    <span class="hljs-comment"># Prepare output data</span>
    output_data = {
        <span class="hljs-string">"prompt"</span>: input_text.strip(),
        <span class="hljs-string">"response"</span>: clean_output
    }

    print(<span class="hljs-string">f"Generated text: <span class="hljs-subst">{clean_output}</span>"</span>)
    print(<span class="hljs-string">f"Output data: <span class="hljs-subst">{output_data}</span>"</span>)

    output_path = <span class="hljs-string">'/outputs/results.json'</span>
    os.makedirs(os.path.dirname(output_path), exist_ok=<span class="hljs-literal">True</span>)

    <span class="hljs-comment"># Save to JSON file</span>
    <span class="hljs-keyword">with</span> open(output_path, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> f:
        json.dump(output_data, f, indent=<span class="hljs-number">2</span>)

    print(<span class="hljs-string">f"Results saved to <span class="hljs-subst">{output_path}</span>"</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    main()
</code></pre>
<h2 id="heading-testing-your-module-locally">Testing your module locally</h2>
<p>Once you’re happy with the inference script, build and containerize the module and test it by running a job.</p>
<p>Please note that if you’re building a module with a high-compute model, running jobs locally may fail or perform poorly if your system doesn’t have a GPU capable of handling the model's requirements. Using a GPU-enabled environment is strongly recommended for such cases.</p>
<p>To build the image, run the following:</p>
<p><code>docker build -t &lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt; .</code></p>
<p>Once the build has completed, to run it you will need to specify the inputs for the job, along with where the results are being stored (<code>/outputs</code>) and finally the module name and tag from the previous build step.</p>
<pre><code class="lang-bash">docker run -e MODEL_INPUT=<span class="hljs-string">"Today was a good day."</span> \
-v $(<span class="hljs-built_in">pwd</span>)/outputs:/outputs \
&lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt;
</code></pre>
<p>If you run into any issues while testing locally, check your Docker logs. This will help you identify what part of the process is causing issues: <code>docker logs &lt;CONTAINER_ID&gt;</code></p>
<h2 id="heading-uploading-docker-image">Uploading Docker image</h2>
<p>To make your Lilypad module accessible on the network, you'll need to upload your Docker image to a container registry, such as DockerHub.</p>
<p>This step is necessary because Lilypad resource providers will pull your module’s image to execute jobs. In this guide we will be using Docker Hub so please refer to the <a target="_blank" href="https://docs.docker.com/docker-hub/repos/create/">official Docker Hub guide to create your repository</a>.</p>
<p>The approach for macOS differs because modern Macs typically use a different architecture (ARM64) than the Linux-based environments (linux/amd64) where Lilypad modules are executed. To make the Docker image compatible with Linux resource providers, the <code>docker buildx</code> command is used on macOS, allowing the builder to specify the target platform with <code>--platform linux/amd64</code>.</p>
<p>For Linux: <code>docker build -t &lt;USERNAME&gt;/&lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt; .</code> followed by <code>docker push &lt;USERNAME&gt;/&lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt;</code></p>
<p>For macOS:</p>
<pre><code class="lang-bash">docker buildx build \
--platform linux/amd64 \
-t &lt;USERNAME&gt;/&lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt; \
--push \
.
</code></pre>
<h2 id="heading-pushing-your-module-to-github">Pushing your module to GitHub</h2>
<p>The <code>lilypad_module.json.tmpl</code> file serves as the interface between a module and Lilypad, defining how jobs are executed and allowing users to customize inputs, outputs, and tunable parameters. This file is key to making your module functional and adaptable because it provides the specifications needed for job execution and resource allocation: the compute resources (e.g., GPU, CPU), job execution details such as the Docker image, entrypoint, and environment variables, output directories for results, job concurrency, and timeouts.</p>
<p>Inputs vary from model to model, so declaring them in this file is crucial for handling user inputs for a module. The <code>EnvironmentVariables</code> section centralizes all user inputs (including tunables like seeds, steps, and batch size) and passes them as environment variables into the containerized job environment.</p>
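<p>On the script side, those environment variables are read back with defaults and type conversions. In this sketch, <code>MODEL_INPUT</code> matches the Falcon 7B template, while the other variable names are hypothetical examples of tunables a module might declare:</p>

```python
# Sketch: read tunables the job template passes as environment variables.
# MODEL_INPUT mirrors the Falcon 7B template; MAX_NEW_TOKENS and TEMPERATURE
# are hypothetical extra tunables shown for illustration.
import os


def read_tunables(env=None):
    """Return typed tunables with defaults, from `env` or os.environ."""
    env = os.environ if env is None else env
    return {
        "prompt": env.get("MODEL_INPUT", "Write a haiku about Lilypads"),
        "max_new_tokens": int(env.get("MAX_NEW_TOKENS", "256")),
        "temperature": float(env.get("TEMPERATURE", "0.7")),
    }
```

Keeping every input behind a default this way means the module still runs when a caller omits a tunable, which matches the template's fallback behavior.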
<p>Here is an example of the file for the Falcon 7B module:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"machine"</span>: {
        <span class="hljs-attr">"gpu"</span>: <span class="hljs-number">1</span>,
        <span class="hljs-attr">"cpu"</span>: <span class="hljs-number">1000</span>,
        <span class="hljs-attr">"ram"</span>: <span class="hljs-number">8000</span>
    },
    <span class="hljs-attr">"job"</span>: {
        <span class="hljs-attr">"APIVersion"</span>: <span class="hljs-string">"V1beta1"</span>,
        <span class="hljs-attr">"Spec"</span>: {
            <span class="hljs-attr">"Deal"</span>: {
                <span class="hljs-attr">"Concurrency"</span>: <span class="hljs-number">1</span>
            },
            <span class="hljs-attr">"Docker"</span>: {
                <span class="hljs-attr">"Entrypoint"</span>: [<span class="hljs-string">"python"</span>, <span class="hljs-string">"/app/run_inference.py"</span>],
                <span class="hljs-attr">"WorkingDirectory"</span>: <span class="hljs-string">"/app"</span>,
                <span class="hljs-attr">"EnvironmentVariables"</span>: [
                    {{ if .MODEL_INPUT }}<span class="hljs-string">"MODEL_INPUT={{ js .MODEL_INPUT }}"</span>{{ else }}<span class="hljs-string">"MODEL_INPUT=Write a haiku about Lilypads"</span>{{ end }},
                    <span class="hljs-string">"HF_HUB_OFFLINE=1"</span>
                ],
                <span class="hljs-attr">"Image"</span>: <span class="hljs-string">"narbs91/lilypad-falcon-7b-instruct-modulev8:latest"</span>
            },
            <span class="hljs-attr">"Engine"</span>: <span class="hljs-string">"Docker"</span>,
            <span class="hljs-attr">"Network"</span>: {
                <span class="hljs-attr">"Type"</span>: <span class="hljs-string">"None"</span>
            },
            <span class="hljs-attr">"Outputs"</span>: [
                {
                    <span class="hljs-attr">"Name"</span>: <span class="hljs-string">"outputs"</span>,
                    <span class="hljs-attr">"Path"</span>: <span class="hljs-string">"/outputs"</span>
                }
            ],
            <span class="hljs-attr">"PublisherSpec"</span>: {
                <span class="hljs-attr">"Type"</span>: <span class="hljs-string">"ipfs"</span>
            },
            <span class="hljs-attr">"Resources"</span>: {
                <span class="hljs-attr">"GPU"</span>: <span class="hljs-string">"1"</span>
            },
            <span class="hljs-attr">"Timeout"</span>: <span class="hljs-number">600</span>
        }
    }
}
</code></pre>
<p>Before pushing your changes to GitHub, you must change the image reference in the <code>lilypad_module.json.tmpl</code> file. Take note of the line <code>"Image": "narbs91/lilypad-falcon-7b-instruct-modulev8:latest"</code> in the example above, which points to the latest build for that image. Specify your own image using the same <code>&lt;USERNAME&gt;/&lt;MODULE_NAME&gt;:&lt;MODULE_TAG&gt;</code> structure we used when building and pushing the image.</p>
<p>Create a new repository and name it according to your desired module name. Push all your code to this repository. When running the module with the Lilypad CLI, you’ll need to either retrieve the commit hash or tag a specific version to use.</p>
<h2 id="heading-testing-module-on-lilypad">Testing module on Lilypad</h2>
<p>To test your Lilypad module, you will need the following things before running the CLI command:</p>
<ul>
<li><p>Have the <a target="_blank" href="https://docs.lilypad.tech/lilypad/lilypad-testnet/install-run-requirements">Lilypad CLI installed</a></p>
</li>
<li><p>Module repo link (<code>github.com/&lt;USERNAME&gt;/&lt;REPO_NAME&gt;</code>)</p>
</li>
<li><p>The desired commit hash (SHA) or tag</p>
</li>
</ul>
<p>Once you have all of the above, you can open your terminal and run your command. Note that your command will look different based on the inputs you’ve declared in the <code>lilypad_module.json.tmpl</code> file. For example, running the Falcon 7B module illustrated in the previous sections looks like:</p>
<p><code>lilypad run --network &lt;NETWORK&gt; github.com/&lt;USERNAME&gt;/&lt;REPO_NAME&gt;:&lt;SHA_OR_TAG&gt; --web3-private-key &lt;PK&gt; -i &lt;INPUT_NAME&gt;='&lt;INPUT&gt;'</code></p>
<p><code>lilypad run --network demonet github.com/narbs91/lilypad-falcon-7b-instruct-module:v1.8.0 --web3-private-key b3994e7660abe5f65f729bb64163c6cd6b7d0b1a8c67881a7346e3e8c7f026f5 -i MODEL_INPUT='Write me a haiku about Lilypads'</code></p>
<p>When running a Module on DemoNet, if the job run appears to be stuck after a few minutes (sometimes it takes time for a Module to download to the RP node), cancel the job and try again. Open a ticket in the Lilypad <a target="_blank" href="https://discord.com/channels/1212897693450641498/1230231823674642513">Discord</a> with any issues that persist.</p>
<p>If your module is configured correctly, you should be able to run a job successfully! The CLI will return results and store them in a directory inside <code>/tmp/lilypad/data/downloaded-files/</code>.</p>
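<p>If you want to post-process results programmatically, a small helper can locate the newest <code>results.json</code> under that directory. The base path comes from this guide; the nested folder layout is an assumption:</p>

```python
# Sketch: find and parse the newest results.json under the Lilypad downloads
# directory. The default path comes from the guide; subfolder layout is assumed.
import json
from pathlib import Path


def latest_results(base="/tmp/lilypad/data/downloaded-files"):
    """Return the parsed newest results.json under `base`, or None if absent."""
    root = Path(base)
    if not root.exists():
        return None
    candidates = sorted(
        root.rglob("results.json"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,  # newest first
    )
    if not candidates:
        return None
    return json.loads(candidates[0].read_text())
```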
<h2 id="heading-conclusion-and-next-steps">Conclusion and next steps</h2>
<p>Lilypad modules serve as the foundation for enabling decentralized AI workloads, providing a standardized yet flexible framework for running diverse computational tasks on the Lilypad network. By encapsulating input handling, task logic, and output generation into self-contained units, these modules empower developers, researchers and resource providers to contribute to a scalable and collaborative ecosystem.</p>
<p>As next steps, explore creating modules tailored to your specific AI tasks or computational workloads. Follow best practices for optimization, such as efficient dependency management, logging and error handling. You can also contribute to the growing Lilypad ecosystem by sharing your module, collaborating with the community and refining it based on real-world use cases.</p>
<h2 id="heading-resources">Resources</h2>
<ul>
<li><p><a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/lilypad-modules/build-a-job-module">Lilypad module builder docs</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Lilypad-Tech/awesome-Lilypad?tab=readme-ov-file#modules">awesome-lilypad modules</a> - A collection of modules available on Lilypad</p>
</li>
<li><p><a target="_blank" href="https://github.com/narbs91/lilypad-falcon-7b-instruct-module">Falcon 7B Lilypad module code</a> - The module used as an example in this guide</p>
</li>
</ul>
<p>Help us improve! We’d love to hear about your experiences building modules. Please <a target="_blank" href="https://github.com/orgs/Lilypad-Tech/discussions/new?category=builder-feedback">start a new discussion here</a> and provide as much information as possible! Any feedback is very much appreciated.</p>
]]></content:encoded></item><item><title><![CDATA[Lilypad Builder-verse!]]></title><description><![CDATA[At the Lilypad Network, our mission is to democratize access to compute resources with tooling to deploy powerful Agents and open source LLM workflows. To achieve our mission of accelerating open source AI, we’re enlisting the help of the Lilypad com...]]></description><link>https://blog.lilypad.tech/lilypad-builder-verse</link><guid isPermaLink="true">https://blog.lilypad.tech/lilypad-builder-verse</guid><category><![CDATA[AI]]></category><category><![CDATA[aitools]]></category><category><![CDATA[research]]></category><category><![CDATA[build]]></category><category><![CDATA[modules]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Developer]]></category><category><![CDATA[Developer Tools]]></category><category><![CDATA[innovation]]></category><dc:creator><![CDATA[Devlin Rocha]]></dc:creator><pubDate>Tue, 28 Jan 2025 19:24:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1738087477037/85cc9ffe-70e4-4b16-8744-5badc5160792.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At the Lilypad Network, our mission is to democratize access to compute resources with tooling to deploy powerful Agents and open source LLM workflows. To achieve our mission of accelerating open source AI, we’re enlisting the help of the Lilypad community to <strong>build systems that solve real-world problems</strong>!</p>
<h1 id="heading-introducing-the-lilypad-builder-verse">Introducing the Lilypad Builder-verse</h1>
<p><strong>Today, we’re launching the Lilypad Builder-verse,</strong> a hub where developers, labs, and businesses can access guides, workshops, and live streams centered around deploying LLM workflows! This includes creating containerized LLM programs as Lilypad Modules and using <a target="_blank" href="https://docs.lilypad.tech/lilypad/lilypad-modules/modules-intro">Lilypad Modules</a> within LLM workflows like Agent pipelines! Community members can <a target="_blank" href="https://blog.lilypadnetwork.org/lilypad-module-creator-rewards-beta">earn rewards</a> for building on the network and growing the popularity of their projects.</p>
<p>Learn more in the <a target="_blank" href="https://docs.lilypad.tech/lilypad">docs</a> about Lilypad Modules and MLops tooling provided to run inference on the network!</p>
<h1 id="heading-the-builder-verse-amp-incentivenet-stage-2">The Builder-verse &amp; IncentiveNet Stage 2</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1738088484136/b72b7669-c95d-4b02-98c7-6862a05853a9.png" alt class="image--center mx-auto" /></p>
<p>As the second stage of <a target="_blank" href="https://blog.lilypadnetwork.org/update-to-the-lilybit-rewards-calculation#heading-incentivenet-timeline">IncentiveNet</a>, the Builder-verse provides an opportunity to <a target="_blank" href="https://blog.lilypadnetwork.org/lilypad-module-creator-rewards-beta">earn Lilybit rewards</a> just by building on the network!</p>
<ul>
<li><p><a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/lilypad-modules/build-a-job-module">Build</a> and submit a Module to the Lilypad <a target="_blank" href="https://github.com/Lilypad-Tech/awesome-Lilypad/blob/main/README.md#modules">Module Marketplace</a>.</p>
</li>
<li><p>Build Agents and applications that use Lilypad Modules to scale parallel inference.</p>
</li>
<li><p>Promote your work to the community!</p>
</li>
</ul>
<p>The more usage your module or Agent gets from the ecosystem, the more rewards you earn! Check out the <a target="_blank" href="https://oss.lilypad.tech/">leaderboard</a> to see the rewards community in action!</p>
<p><a target="_blank" href="https://lu.ma/lilypadnetwork">Join us</a> each week for Builders Live, where we’ll host:</p>
<ul>
<li><p>Community demos of Lilypad Modules and LLM workflows (come demo your project!)</p>
</li>
<li><p>Workshops on “How to Run LLMs on Lilypad with a Module,” “How to Build an Agent Workflow on Lilypad,” and more</p>
</li>
</ul>
<p>Looking for inspiration? Our team put together a <a target="_blank" href="https://blog.lilypadnetwork.org/fuel-the-future-by-building-on-lilypad-and-accelerate-open-source-ai">Request for Modules</a> based on community input, plus a guide for <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/running-lilypad-in-a-front-end">Building a Front End on Lilypad</a>!</p>
<p>Stay tuned for a full <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/lilypad-modules/build-a-job-module">Module creation guide</a> and an <strong>Agent cookbook</strong> with templates and best practices for deploying Agent workflows on Lilypad. For inspiration, check out the Agent projects we are working on with partners and will be deploying to Lilypad in production soon!</p>
<ul>
<li><p><a target="_blank" href="https://github.com/noryev/local-rag">Lilypad Support Agent</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/mavericb/ai-oncologist">AI Oncologist Research Agent</a></p>
</li>
</ul>
<h1 id="heading-build-and-earn">Build and Earn!</h1>
<p>Our <a target="_blank" href="https://oss.lilypad.tech/">Module Creator Leaderboard</a> launched recently, and we’re eager to see new submissions! Anyone who <strong>creates a working Module, Agent workflow, or Integration</strong> will be rewarded in <strong>Lilybits</strong>. Generate a lot of usage? Earn <strong>bonus Lilybits</strong> based on the demand your work creates!</p>
<p>To submit a Module or Agent workflow for rewards, follow the directions in this <a target="_blank" href="https://blog.lilypadnetwork.org/lilypad-module-creator-rewards-beta">guide</a>!</p>
<h1 id="heading-whats-a-lilypad-module">What’s a Lilypad Module?</h1>
<p>A <strong>Lilypad</strong> <strong>Module</strong> is a Git repository that uses predefined templates and inputs to perform various tasks on the Lilypad Network. Once created, a Module can be called by any builder simply by using the <strong>Lilypad</strong> <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/inference-api"><strong>Inference API</strong></a><strong>,</strong> <a target="_blank" href="https://docs.lilypad.tech/lilypad/lilypad-testnet/install-run-requirements"><strong>CLI</strong></a> or other <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/js-cli-wrapper-local">MLops tooling</a> (hosted API endpoint coming soon!).</p>
<ul>
<li><p><strong>Accelerate Open Source AI</strong>: Once a Module is built, anyone can build an Agent or application running that Module on Lilypad with instant load balancing and blockchain-verified compute jobs.</p>
</li>
<li><p><strong>Instant Deployment</strong>: Containerize your LLM in a Docker-based Lilypad module, and anyone on the network can call it in an Agent workflow or an application.</p>
</li>
<li><p><strong>Scalable Infrastructure</strong>: Tap into Lilypad’s inference network, where workloads are balanced automatically.</p>
</li>
<li><p><strong>Accessible Tooling</strong>: Use <a target="_blank" href="https://docs.lilypad.tech/lilypad/lilypad-testnet/install-run-requirements">the <strong>Lilypad CLI</strong></a> and other <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/js-cli-wrapper-local">developer resources</a> (hosted API coming soon!) to streamline your AI deployments.</p>
</li>
</ul>
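<p>To make the Module structure concrete, here is an illustrative sketch of the kind of job template a Module repository might carry. The field names and values below are illustrative only, not the exact schema; see the <a target="_blank" href="https://docs.lilypad.tech/lilypad/developer-resources/lilypad-modules/build-a-job-module">Module build guide</a> for the real template:</p>

```json
{
  "machine": { "gpu": 1, "cpu": 1000, "ram": 8000 },
  "job": {
    "Spec": {
      "Deal": { "Concurrency": 1 },
      "Docker": {
        "Image": "example/llm-module:latest",
        "Entrypoint": ["python", "run.py", "--prompt", "{{ .prompt }}"]
      },
      "Engine": "Docker",
      "Outputs": [{ "Name": "outputs", "Path": "/outputs" }]
    }
  }
}
```

<p>The idea is that the template pins down the Docker image and resource requirements, while placeholders like <code>{{ .prompt }}</code> are filled from the inputs a caller passes at run time.</p>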
<h1 id="heading-what-can-i-build-on-lilypad">What can I build on Lilypad?</h1>
<p>Once a Module is live on Lilypad, developers can easily build and deploy Agent workflows and LLM-based applications! This unique offering lets developers skip the tedious and costly process of deploying LLMs to infrastructure and load balancing the processes. Just choose a Lilypad Module to run, then use our MLOps tooling in your app&#x27;s backend.</p>
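<p>As a rough sketch of what calling a Module from an app backend could look like: the <code>lilypad run &lt;module&gt; -i key=value</code> invocation shape is taken from the CLI docs, but the module name and the helper function below are hypothetical, for illustration only:</p>

```python
import subprocess

def build_lilypad_cmd(module: str, inputs: dict) -> list:
    """Assemble a `lilypad run` command line for a Module.

    Hypothetical helper; the invocation shape assumed here is
        lilypad run <module>:<tag> -i key=value
    as described in the Lilypad CLI docs.
    """
    cmd = ["lilypad", "run", module]
    for key, value in inputs.items():
        cmd += ["-i", f"{key}={value}"]
    return cmd

# Example: invoke a (hypothetical) text-generation Module from a backend.
cmd = build_lilypad_cmd("github.com/example/llm-module:v1.0.0", {"prompt": "hello"})
# subprocess.run(cmd, check=True)  # requires the Lilypad CLI installed and a funded wallet
print(cmd)
```

<p>In a real backend you would run this command (or use the JS CLI wrapper) and read the job&#x27;s output from the results directory it reports.</p>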
<p>Lilypad specializes in supporting large-scale parallel processing for LLM workflows within a serverless-style environment. Lilypad is currently in Testnet, and to support parallel processing at scale, our team is carefully scaling up the GPU supply on the network.</p>
<p>Check out the <a target="_blank" href="https://github.com/Lilypad-Tech/awesome-Lilypad?tab=readme-ov-file#use-cases">awesome-Lilypad</a> repo for examples like the <a target="_blank" href="https://docs.lilypad.tech/lilypad/use-cases/lilypad-ml-workbench">ML workbench</a>, <a target="_blank" href="https://docs.lilypad.tech/lilypad/use-cases/waterlily.ai">WaterLily.AI</a>, and more!<br />We&#x27;re also collaborating with amazing partners on Agent workflows for market research, protein folding, and more. <strong>Stay tuned</strong>!</p>
<h1 id="heading-what-about-rewards-for-resource-providers">What about rewards for Resource Providers?</h1>
<p>The <a target="_blank" href="https://blog.lilypadnetwork.org/update-to-the-lilybit-rewards-calculation#heading-incentivenet-timeline">Lilypad IncentiveNet</a> (Testnet) continues with Rewards opportunities for both Builders and Resource Providers (GPUs)! For the RP community, our team has been hard at work re-architecting the IncentiveNet rewards system to align with Mainnet and will launch a RP beta program as soon as possible.</p>
<p>The RP Beta program will employ a verification system, with our team vetting nodes that would like to join the network to ensure they can reliably run jobs when online. This involves many factors, including meeting the hardware and internet speed requirements, keeping the node up to date, and ensuring the full resources of the node are available to run jobs. The process will ensure high service quality for end users of the network!</p>
<p>Sign up for the RP Beta program <a target="_blank" href="https://forms.gle/qrreXoDNtxDppPik9">here</a> and stay tuned in the <a target="_blank" href="https://discord.com/channels/1212897693450641498/1256179769356189707">updates-RP</a> discord channel for announcements on the next steps.</p>
<h1 id="heading-stay-connected">Stay Connected 🌐</h1>
<p><strong>We can’t wait to see what you build in the Lilypad Builder-verse!</strong> For more details, check out the high-level roadmap on <a target="_blank" href="https://lilypad.tech/#roadmap">the Lilypad website</a>.</p>
<p>Join the Lilypad Discord and follow us on socials to stay up to date with the action!</p>
<ul>
<li><p><a target="_blank" href="https://discord.com/invite/WtHbjMP5UB">Discord</a></p>
</li>
<li><p><a target="_blank" href="https://twitter.com/lilypad_tech">Twitter/X</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/Lilypad-Tech/">GitHub</a></p>
</li>
</ul>
]]></content:encoded></item></channel></rss>