
2 posts tagged with "LooPIN"


LooPIN - TestNet Summary and Main Net Upgrades

· 7 min read
Jessica Davis
Researcher @ Loopro AI

LooPIN's testnet launched on April 9, 2024, and more than 190 days have passed since then. During this time, the protocol has undergone extensive testing: its robustness has been confirmed, and several potential improvements have been identified. This blog summarizes our testnet report and highlights the changes implemented for the mainnet phase.

Liquidity Providers and Proof-of-Computing-Power-Staking (PoCPS)

At LooPIN, our goal is to provide a robust protocol that offers stable computing resources to users while ensuring long-term engagement from both users and liquidity providers. The stability of the computing resources supplied by liquidity providers is crucial to enhancing the usability of the protocol. We aim to create a system where users consistently rely on LooPIN, and liquidity providers are motivated to offer stable computing power. However, several issues emerged during our testnet phase:

  1. Stability of Computing Resources: The original Proof-of-Computing-Power-Staking (PoCPS) algorithm we proposed proved insufficient for achieving enterprise-level stability. Some liquidity providers had GPUs that passed PoCPS verification, but when users purchased GPU hours, the machines failed to meet their performance requirements. New nodes also lacked a proper test period during the testnet phase, leading to instability in resource availability.
  2. Liquidity Token Volatility: While this issue didn’t occur during the testnet phase, we anticipate that liquidity providers might sell large amounts of LooPIN tokens in the mainnet, prompting others to do the same. This could result in token instability and hinder the protocol's scalability.
  3. Data Privacy Concerns: AI researchers, one of our key user groups, expressed concerns about data safety. The original PoCPS algorithm did not include measures to protect user data privacy.

While the original PoCPS is a robust Proof of Work (PoW) algorithm designed to prevent several major attack vectors, it does not address the critical needs of stability, scalability, or privacy. To tackle these challenges while maintaining the PoW foundation of our protocol, we’ve extended the original PoCPS framework to include:

PoCPS = PoT × PoL × PoP × PoW

Where:

  • PoT (Proof-of-Time) addresses time-based stability testing,
  • PoL (Proof-of-Loyalty) focuses on liquidity management and control,
  • PoP (Proof-of-Privacy) enhances privacy safeguards for user data, and
  • PoW (Proof-of-Work) represents the original PoCPS proposed in our whitepaper.

This new structure aims to improve stability, scalability, and privacy while preserving the core elements of our protocol.

The PoT, PoL, PoP, and PoW can be defined as follows:

PoT (Proof-of-Time): The Proof-of-Time rewards are based on how long a node has been part of the protocol. The longer a node remains active, the greater the rewards it can earn, capped by the number of nodes in the network. The formula for PoT is:

PoT = exp[α · min(0, t − T₀) / T₀]

Where

  • t is the time the node has been in the protocol,
  • T₀ is the ramp-up period after which a node receives full rewards,
  • α determines the rewards for new nodes.

Currently,

  • α = 2.303, meaning a newly added node starts at roughly 10% of the full rewards and ramps up to 100% as t approaches T₀, and
  • T₀ = 30 days in the mainnet phase.
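As a quick check of these constants, the PoT factor can be sketched in a few lines of Python (the function name is ours; the formula is the one above):

```python
import math

def proof_of_time(t: float, T0: float = 30.0, alpha: float = 2.303) -> float:
    """PoT = exp(alpha * min(0, t - T0) / T0).

    Returns 1.0 once a node has been active for at least T0 days;
    a brand-new node (t = 0) starts at exp(-alpha), roughly 10%.
    """
    return math.exp(alpha * min(0.0, t - T0) / T0)

print(proof_of_time(0))    # ~0.09996, i.e. roughly 10%
print(proof_of_time(30))   # 1.0
```

Note that α = 2.303 ≈ ln 10 is exactly what makes the day-zero multiplier land at one tenth of the full reward.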

PoL (Proof-of-Loyalty): The Proof-of-Loyalty is related to how much of the LooPIN token the node has sold. Nodes that sell more tokens show lower loyalty and thus receive lower rewards. The formula for PoL is:

PoL = exp[β · min(0, C_held − C_minted) / C_minted]

Where

  • C_held is the amount of LooPIN tokens currently held by the node,
  • C_minted is the total amount of LooPIN tokens minted by the node over its lifespan,
  • β controls the reward reduction for nodes that have sold a significant portion of their tokens.

Currently β = 7.324.

PoP (Proof-of-Privacy): The Proof-of-Privacy rewards nodes based on how well they protect user data. Nodes that maintain higher privacy standards receive higher rewards. The formula for PoP is:

PoP = exp[γ · min(0, P − P_limit) / P_limit]

Where

  • P is the node's privacy level,
  • P_limit is the benchmark privacy level,
  • γ determines the reward reduction for nodes with lower privacy standards.

Currently γ = α = 2.303.

PoW (Proof-of-Work): The Proof-of-Work rewards are based on the performance of the node compared to similar GPUs in the network. Nodes with better performance receive higher rewards. The formula for PoW is:

PoW = exp[δ · min(0, W − W_benchmark) / W_benchmark]

Where

  • W is the performance level of the node,
  • W_benchmark is the benchmark performance level,
  • δ determines the reward reduction for underperforming nodes.

Currently δ = β = 7.324.
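Since all four factors share the same exponential-penalty form, the full PoCPS multiplier can be sketched as follows (function and argument names are ours, not part of the protocol):

```python
import math

def penalty(value: float, benchmark: float, coeff: float) -> float:
    """exp(coeff * min(0, value - benchmark) / benchmark): equals 1.0 at
    or above the benchmark, exponentially discounted below it."""
    return math.exp(coeff * min(0.0, value - benchmark) / benchmark)

ALPHA, BETA = 2.303, 7.324  # gamma = alpha, delta = beta

def pocps_multiplier(t, T0, c_held, c_minted, p, p_limit, w, w_benchmark):
    pot = penalty(t, T0, ALPHA)            # Proof-of-Time
    pol = penalty(c_held, c_minted, BETA)  # Proof-of-Loyalty
    pop = penalty(p, p_limit, ALPHA)       # Proof-of-Privacy
    pow_ = penalty(w, w_benchmark, BETA)   # Proof-of-Work
    return pot * pol * pop * pow_

# A mature, loyal, private, benchmark-level node keeps full rewards:
print(pocps_multiplier(60, 30, 1000, 1000, 1.0, 1.0, 100, 100))  # 1.0

# The same node after selling half of its minted tokens:
print(pocps_multiplier(60, 30, 500, 1000, 1.0, 1.0, 100, 100))   # ~0.026
```

The multiplicative structure means any single factor can collapse the total reward: a node that sells half its minted tokens keeps only exp(−β/2) ≈ 2.6% of its rewards, regardless of how well it scores elsewhere.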

Liquidity of the Computing Power Pools

During our testnet phase, the protocol required liquidity providers to stake LooPIN tokens equivalent to 24 hours of computing power for each machine they added to the liquidity pool. The original rationale was that LooPIN tokens are limited, and staking a larger amount would be challenging. Additionally, token transfers between miners were uncommon, making it difficult for data centers to participate. As the testnet progressed, some issues became apparent:

  1. Due to the limited depth of the liquidity pool, computing power prices experienced significant fluctuations. For instance, the price of A100 GPUs swung from 1.3 to 0.9 in a single day, and this occurred frequently.

This volatility has been especially problematic for our users in universities and AI research labs, who have urged us to stabilize prices and make the protocol more user-friendly. To address these concerns, we've increased the staking requirement from 24 hours (one day) to 7 days (one week) for liquidity providers. With more staked computing hours and LooPIN tokens, the depth of the liquidity pools will increase, reducing price volatility to just 14% of what it was during the testnet phase. We can further increase the staking requirement, but we leave this for the next phase.
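The effect of deeper pools on price impact can be illustrated with a toy constant-product (x·y = k) model. This is an assumption on our part, since the post does not specify the pool's pricing curve; the numbers below are illustrative only:

```python
def price_impact(hours: float, tokens: float, hours_sold: float) -> float:
    """Relative price drop after selling `hours_sold` GPU-hours into a
    constant-product pool holding `hours` and `tokens` reserves."""
    k = hours * tokens
    p0 = tokens / hours               # price per GPU-hour before the sale
    new_hours = hours + hours_sold
    p1 = (k / new_hours) / new_hours  # price after the sale
    return (p0 - p1) / p0

# Selling 6 GPU-hours into a 1-day-deep pool vs a 7-day-deep pool:
shallow = price_impact(24, 24, 6)         # ~36% price drop
deep = price_impact(7 * 24, 7 * 24, 6)    # ~7% price drop
print(shallow, deep)
```

In this toy model, a sale that moves a one-day-deep pool by roughly a third barely dents a seven-day-deep pool, which is the intuition behind raising the staking requirement.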

Speculative Resource Selling

During our testnet phase, the protocol allowed speculative sellers to offer their computing resources in hourly increments, with a minimum of 1 hour and a maximum of 24 hours. The intent behind this setup was to enhance liquidity, giving sellers as much flexibility as possible in selling their computing power. However, the testnet revealed several issues:

  1. For computing power sold in 1-hour increments, it was difficult for the protocol to match users before the time elapsed, leading to a high percentage of unused time.
  2. Most sellers were from data centers, and their demand to sell exceeded the 24-hour maximum.

Given that LooPIN aims to be a highly efficient protocol, reducing the percentage of unused time is essential. As a result, we've adjusted the selling resolution to 1 day, with a minimum selling period of 1 day and a maximum of 7 days (one week) in our mainnet phase. This also means that computing power sold to extract liquidity from the pool must keep the machine usable for a longer period.

Additionally, to further enhance network stability, we’re increasing the collateral amount from 1x to 100x the value of the sold GPU hours. This adjustment helps prevent malicious sellers from engaging in token price arbitrage.

Buyers

During our testnet phase, the protocol allowed buyers to purchase computing resources in hourly increments, with a minimum of 1 hour and a maximum of 24 hours. However, users in universities and AI research labs emphasized the importance of being able to extend the duration of their original purchase and to terminate instances early. In response, we have implemented these features in our mainnet phase, making the protocol more accessible and user-friendly.

Liquidity Pool

Based on feedback from startups and research labs, we're adding more enterprise GPUs for AI training. Since AI training requires extra RAM and TFlops, we'll include H100 (SXM or PCIe) and L40 (L40s) in our liquidity pool with staking rewards. We'll also bring in next-gen GPUs like the RTX 5090 for AI inference tasks in the near future. Additionally, we'll gradually retire older or less popular GPUs like the RTX 4080/3080, RTX 4070/3070, Tesla T4, and V100 based on usage and rental history.

The Genesis of PinFi

· 10 min read
Jessica Davis
Researcher @ Loopro AI

Every legend has its context. None emerges from a vacuum, nor do they fade into obscurity without impact. A true legend isn’t merely powerful; her arrival is impeccably timed—not a moment too soon or too late, which cements her legendary status. This pattern repeats itself in the realm of cryptocurrency. From Bitcoin to Ethereum, from DeFi to DePIN, and now, with PinFi, we witness the rise of legends time and again.

I know you have tens of thousands of doubts and questions swirling through your mind. That’s exactly why I wrote this blog: to thoroughly explain the concept of PinFi. From its inception and theoretical underpinnings to its applications in AI computing and potential impact on asset exchanges, this post covers everything you need to know.

Exploring PinFi: What It Is and Why It's Essential

Before we dig deep into PinFi, we need to consider the concept of “dissipative assets”. As its name indicates, a dissipative asset is a type of asset whose value naturally declines over time. Humans, services, electrical power, AI computing power, and hotel rooms all fall into this category. In fact, nearly everything you can think of qualifies as a dissipative asset. That’s right—even your house is a dissipative asset. Without proper maintenance, its value will diminish as time passes.

Computing power, a key type of dissipative asset, is currently a major focus within the DePIN community. Countless teams are diligently working to develop their own decentralized computing networks (DCNs), such as Io.net, Nosana, Render Network, and Akash Network, among others. The growth of DCNs seems limitless, with more entities poised to enter the space and establish their networks.

The primary metric used to showcase the merits of these networks is the number of GPUs each DCN integrates. While the figures are impressive, the underlying reality often tells a different story. A closer inspection reveals minimal utilization of these networks; typically, only about 5% of the GPUs are actively engaged in processing AI tasks. The remaining 95% are largely underutilized, merely serving to bolster the network's claims of capacity. This discrepancy raises important questions about the efficiency and real-world utility of DCNs. Why does this underutilization occur?

The key to understanding the underutilization of DCNs lies in assessing what DCNs can do better than traditional data centers. For training smaller models, such as CNNs, ResNets, or federated learning models, extensive parallel GPU use is not essential. As long as the GPUs provide sufficient TFLOPS and vRAM, these models can be effectively trained. This also applies to fine-tuning and inference with these smaller models, where DCNs should prove to be beneficial.

However, training modern large language models (LLMs) on DCNs presents challenges. Typically, it's not the TFLOPS of individual GPUs that limits this process, but rather bandwidth availability. State-of-the-art GPU training clusters rely on rail-optimized, any-to-any Clos networks, which are critical for high-performance model training. To provide sufficient inter-node bandwidth, RDMA NICs capable of 100 Gbps are essential. Furthermore, efficient job schedulers like Pollux, Themis, or Cassini are crucial to manage resources effectively. The absence of high-bandwidth connections and sophisticated scheduling schemes significantly restricts the usefulness of DCNs for training complex models.

The principal advantages of DCNs over traditional data centers are twofold: (1) they provide reliable and censorship-resistant services, and (2) they offer cost-effectiveness for training, fine-tuning, and inference of small models, as well as for fine-tuning and inference of large models. Any DCN naturally meets the first criterion, provided its job-scheduling algorithm is sufficiently competent. However, the second advantage is compromised by the prevalent pricing model in existing DCNs: a centralized order book that implies infinite liquidity at a single price. This pricing model leads to underutilization and illiquidity of resources.

To attract more users to decentralized computing networks (DCNs), the pricing of computing resources needs to be competitive. However, it is equally important to ensure that prices aren't so low as to deter GPU providers from connecting to the network. The equilibrium between these two stakeholders establishes the true market price for these resources. To facilitate this dynamic pricing, it is crucial to implement a market-making mechanism within DCNs. The most effective approach is to integrate the automated market-making systems commonly used in decentralized finance into DCNs. This is precisely the role of PinFi. In summary, PinFi represents the convergence of decentralized physical infrastructure networks (DePIN) and decentralized finance (DeFi). Mathematically, PinFi = DeFi(DePIN).

How Does LooPIN Function?

The bird's-eye view of the LooPIN network, the first project based on the PinFi protocol, is illustrated in the figure below. This diagram provides a comprehensive overview of how decentralized finance (DeFi) merges with DCNs to enhance system efficiency and resource utilization.

The LooPIN network is underpinned by several critical components, each contributing to its innovative and decentralized architecture:

  1. Liquidity Pools. Central to our network are the liquidity pools, differentiated by the type of computing power resource they represent (e.g., GPUs, and potentially in the future, TPUs or Groq chips). These pools are designed to be permissionless and resistant to censorship, enabling anyone to establish a liquidity pool on the blockchain via our smart contracts.
  2. Decentralized Computing Network Ensemble (DCNE). The aggregation of these permissionless liquidity pools, contributed by a variety of independent computing providers, forms what we refer to as the Decentralized Computing Network Ensemble (DCNE). This can be thought of as a 'network of networks', representing a new layer of decentralized computing infrastructure.
  3. Providers/Miners. Miners are users who contribute to the LooPIN network by providing Devices and staking the token to a designated liquidity pool. Through participating in the Proof-of-Computing-Power-Staking process, Miners validate their continuous contribution of computing power, available for Client use. At predetermined epochs, Miners are awarded block rewards, proportional to their contribution and the protocol's governance.
  4. Clients/Users. Clients engage with the LooPIN network by utilizing computing resources from specific liquidity pools. Similar to Miners, Clients are eligible for block rewards at designated epochs, acknowledging their participation and contribution to the network's computational demands.
  5. Verifiers. Integral to maintaining the integrity of the LooPIN network, verifiers play a crucial role in assessing the operational status of devices. Leveraging the Proof-of-Computing-Power-Staking scheme, verifiers are empowered to efficiently validate that devices not only claim to, but actually do, provide secure and reliable services to clients.

Figure 1: This diagram shows how PinFi merges DCNs and DeFi to optimize resource utilization and pricing in DCNs.


To understand how the LooPIN system enhances liquidity in Decentralized Computing Networks (DCNs), consider the scenario of a resource holder possessing computing resources. In typical DCNs, this seller cannot immediately sell his computing resources; he must list his resources on the network and wait until someone rents his GPUs. Until such a rental occurs, he cannot monetize his resources. In contrast, the LooPIN system allows him to monetize immediately. By injecting his resources into the corresponding liquidity pool, not only does he monetize, but he also earns rewards.

Figure 2 provides a detailed illustration of how LooPIN operates. It depicts a miner with an NVIDIA RTX 4090 GPU who stakes this asset in the LooPIN Decentralized Computing Network Ensemble (DCNE) for two hours. The staking process involves the following steps:

  1. Token Burn and Certificate Issuance: Miners destroy a designated quantity of tokens to receive a burn token certificate, marking their initial commitment to the network.
  2. Certificate Deposit and Token Allocation: By depositing this certificate along with a maintenance fee (waived for miners opting to be liquidity providers) into a smart contract address, miners are allocated two tokens: the st-4090-token, a non-tradable, non-fungible token acting as proof of liquidity provision with a 2-hour validity, and the 4090-token, whose existence is tied to the st-4090-token's lifespan.
  3. Liquidity Injection: Contributing the 4090-token and an equivalent utility token amount enhances the 4090 pool's liquidity. At the st-4090-token's expiration, two key events occur: the miner receives rewards for staking computing resources and utility tokens are automatically returned to the miner at the prevailing exchange rate.
  4. Liquidity Removal: Miners may opt to sell their 4090-token to the pool for utility tokens, incurring an exchange fee. Upon expiration of the st-4090-token—and pending successful interactive verification—the maintenance fee is refunded.
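The four-step staking flow can be summarized as a toy state machine. All class and method names below are illustrative sketches of the sequence, not the actual contract interface:

```python
class StakeFlow:
    """Toy walk-through of the staking steps; illustrative names only."""

    def __init__(self):
        self.burn_certificate = False
        self.st_token_active = False  # non-tradable proof of provision
        self.pool_liquidity = 0.0

    def burn_tokens(self):
        # Step 1: burn tokens and receive a burn token certificate.
        self.burn_certificate = True

    def deposit_certificate(self):
        # Step 2: deposit the certificate (plus maintenance fee) to
        # receive the st-token and a matching tradable GPU token.
        assert self.burn_certificate, "must burn tokens first"
        self.st_token_active = True

    def inject_liquidity(self, utility_tokens: float):
        # Step 3: pair the GPU token with utility tokens in the pool.
        assert self.st_token_active, "need an active st-token"
        self.pool_liquidity += utility_tokens

    def expire(self) -> float:
        # Steps 3-4: on st-token expiry, liquidity is returned to the
        # miner and the maintenance fee is refunded after verification.
        self.st_token_active = False
        returned, self.pool_liquidity = self.pool_liquidity, 0.0
        return returned

flow = StakeFlow()
flow.burn_tokens()
flow.deposit_certificate()
flow.inject_liquidity(100.0)
print(flow.expire())  # 100.0
```

The assertions encode the ordering constraint from the figure: a certificate must exist before the st-token is issued, and liquidity can only be injected while the st-token is live.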

Figure 2: An illustration of the scheme of LooPIN.


PinFi vs DeFi: Understanding the Differences

In traditional decentralized finance (DeFi) environments, liquidity pools comprising token pairs, such as A and B, are static in the absence of external interactions. The token quantities within these pools are not influenced by the passage of time, t, remaining constant unless affected by transactions or other forms of engagement. This invariance is a hallmark of conventional DeFi liquidity pools, where the absence of activity does not alter the balance of assets. In the DeFi scenario, a liquidity injection can be illustrated as in Figure 3(a).

In contrast, the PinFi protocol introduces a paradigm shift with its approach to liquidity pools. Within PinFi's framework, one of the tokens in the pool represents the total staked computing-power hours: a dynamic, dissipative asset whose value inherently declines over time, t, reflecting the consumptive nature of computing resources. The other token in the pool is the protocol token, serving as the stable counterpart in the pair. A simplified liquidity injection event for a PinFi system is shown in Figure 3(b).
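The contrast can be made concrete with a minimal sketch: a conventional DeFi pool's reserves are invariant in time absent trades, while the computing-hours side of a PinFi pool dissipates as staked hours are consumed. The linear decay below is purely an illustrative choice, not the protocol's actual schedule:

```python
def defi_reserves(x0: float, y0: float, t: float) -> tuple[float, float]:
    """Conventional DeFi pool: reserves do not change with time alone."""
    return x0, y0

def pinfi_reserves(hours0: float, tokens0: float, t: float,
                   burn_rate: float = 1.0) -> tuple[float, float]:
    """PinFi pool: the staked GPU-hours side is consumed over time
    (linear decay here is illustrative only)."""
    return max(0.0, hours0 - burn_rate * t), tokens0

print(defi_reserves(24, 24, 6))   # (24, 24)
print(pinfi_reserves(24, 24, 6))  # (18.0, 24)
```

Because one reserve shrinks even without trades, any invariant-based pricing rule must account for time explicitly, which is the core complication discussed next.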

Figure 3: The conservative principles underlying decentralized finance (DeFi) systems (a), juxtaposed with the inherently dissipative dynamics of physical infrastructure finance (PinFi) systems (b).


This distinction complicates the implementation of the PinFi protocol within smart contracts, and rigorous research is essential to ensure the protocol's stability and sustainability. A critical element of the protocol's effectiveness and market efficiency is the role of liquidity providers (LPs). Several challenges must be addressed to stabilize the protocol: LPs may prefer selling in external markets rather than within the PinFi system; they might choose to sell their assets outright rather than contribute them as liquidity; and they may be reluctant to engage with the protocol at all.

Our research indicates that, assuming participant integrity and under common conditions, the PinFi protocol can maintain a dynamic equilibrium among LPs, sellers, and buyers. The rewards offered to LPs must be carefully balanced: too low, and LPs opt to sell their computing resources for non-dissipative tokens rather than serve as LPs; too high, and it leads to a scarcity of sellers, disrupting the market balance. The intricate dynamics among LPs, buyers, and sellers are further detailed in Figure 4 and elaborated in our recent game-theory paper (see [https://arxiv.org/pdf/2404.02174.pdf]).

Figure 4: Phase Diagrams Illustrating the Dynamics and Equilibria in Dissipative Asset Pricing.


Next steps for PinFi & LooPIN

The conception stage has been realized through two publications and a beta test. Even in our beta test, the core components of the protocol are on-chain, ensuring exceptional fairness across the ecosystem. We plan to gradually transition all off-chain elements to their on-chain equivalents. Additional technical papers, including those focusing on the development of the scheduling mechanisms within the LooPIN network, will be available soon.