
Speedata Secures $44M Series B to Challenge GPUs with Purpose-Built Analytics Processor

8:38 PM   |   03 June 2025

In the rapidly evolving landscape of big data and artificial intelligence, the demand for faster, more efficient data processing is insatiable. Traditional computing architectures, often reliant on general-purpose processors or hardware adapted from other uses, are increasingly struggling to keep pace with the sheer volume and complexity of modern analytical workloads. This challenge has spurred innovation in specialized hardware, and at the forefront of this movement is Speedata, a Tel Aviv-based startup that has just announced a significant milestone in its journey.

Speedata has successfully raised a $44 million Series B funding round, propelling its total capital raised to an impressive $114 million. This substantial investment underscores strong investor confidence in Speedata's vision and its core technology: the Analytics Processing Unit (APU). Unlike processors designed for general computing tasks or graphics rendering, Speedata's APU is purpose-built from the ground up to accelerate the specific demands of big data analytics and AI workloads.

The Need for Specialized Hardware in the Age of Big Data

For decades, the backbone of data analytics has been the standard central processing unit (CPU). While CPUs are versatile and capable of handling a wide range of tasks, they were not architected with the unique characteristics of large-scale data processing in mind. Modern analytics involves sifting through massive datasets, performing complex queries, transformations, and statistical analyses, often requiring highly parallel operations and efficient memory access patterns that differ significantly from typical computational tasks.

More recently, graphics processing units (GPUs), initially developed for rendering graphics in video games and visual applications, have been adapted for data-parallel computing. Their architecture, featuring thousands of smaller cores optimized for parallel tasks, proved surprisingly effective for certain AI workloads, particularly training neural networks. This led to GPUs becoming the dominant processor for AI training and gaining traction in some data analytics tasks.

However, as Speedata CEO Adi Gelvan points out, even GPUs have limitations when it comes to pure data analytics. "For decades, data analytics have relied on standard processing units, and more recently, companies like Nvidia have invested in pushing GPUs for analytics workloads," Gelvan stated in an interview. "But these are either general-purpose processors or processors designed for other workloads, not chips built from the ground up for data analytics."

Gelvan emphasizes that the fundamental bottlenecks in data analytics occur at the computing level, and these bottlenecks are distinct from those encountered in graphics rendering or even many AI training tasks. Data analytics often involves complex data movement, filtering, sorting, joining, and aggregation operations that can be inefficient on architectures optimized primarily for floating-point matrix multiplications, which are common in AI training.
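To make that contrast concrete, here is a small illustrative sketch in Python; it is not Speedata-specific, and the table names and columns are invented for illustration. The first half shows the relational pattern (filter, join, aggregate, sort) that dominates analytics; the second shows the dense matrix multiplication that dominates much AI training.

```python
# Illustrative only: the two workload shapes contrasted above.
import numpy as np
import pandas as pd

orders = pd.DataFrame({"customer_id": [1, 2, 1, 3], "amount": [120.0, 35.5, 80.0, 210.0]})
customers = pd.DataFrame({"customer_id": [1, 2, 3], "region": ["EU", "US", "EU"]})

# Typical analytics pattern: filter -> join -> aggregate -> sort
result = (
    orders[orders["amount"] > 50]                        # filter
    .merge(customers, on="customer_id")                  # join
    .groupby("region", as_index=False)["amount"].sum()   # aggregate
    .sort_values("amount", ascending=False)              # sort
)
print(result)

# Typical AI-training kernel: dense floating-point matrix multiplication
a, b = np.random.rand(512, 512), np.random.rand(512, 512)
c = a @ b
```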

Introducing the Analytics Processing Unit (APU)

Speedata's answer to this challenge is the APU. The core idea is to create a processor specifically tailored to the patterns and demands of data analytics. This involves designing the silicon architecture, memory hierarchy, and instruction set to efficiently handle the types of operations prevalent in analytical queries and data pipelines.

The company's roots lie deep in academic research. Speedata was founded in 2019 by six individuals, some of whom were pioneers in the development of Coarse-Grained Reconfigurable Architecture (CGRA) technology. CGRA represents a class of parallel computing architectures that offer a balance between the flexibility of general-purpose processors and the efficiency of fixed-function hardware (like ASICs). These architectures consist of arrays of processing elements that can be configured or programmed at a coarser granularity than FPGAs, allowing them to be adapted to different algorithms and data flows more efficiently than traditional CPUs or even GPUs for certain tasks.
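For a rough intuition of the CGRA idea, the toy software model below configures each processing element with a whole-word operation (rather than the bit-level logic an FPGA exposes) and streams data through the configured pipeline. This is an illustrative sketch only, not Speedata's design or any real CGRA toolchain.

```python
# Toy software model of a coarse-grained reconfigurable array (CGRA).
import operator

OPS = {"add": operator.add, "mul": operator.mul, "min": min, "max": max}

class ProcessingElement:
    def __init__(self, op_name):
        self.op = OPS[op_name]      # chosen at configuration time, not per instruction

    def fire(self, a, b):
        return self.op(a, b)        # one word-level operation

# "Configure" a tiny pipeline for the dataflow: clamp(x * weight + bias, cap)
pipeline = [ProcessingElement("mul"), ProcessingElement("add"), ProcessingElement("min")]

def run(x, weight=3, bias=1, cap=100):
    for pe, operand in zip(pipeline, [weight, bias, cap]):
        x = pe.fire(x, operand)
    return x

print([run(v) for v in [5, 10, 40]])   # -> [16, 31, 100]
```

Because the configuration happens at the granularity of whole operations, the same array can be rewired for a different query or data flow without the per-bit routing overhead of an FPGA.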

The founders collaborated with experts in ASIC (Application-Specific Integrated Circuit) design to translate their research insights into a practical, high-performance chip. Their initial motivation stemmed from observing that complex data analytics workloads often required massive clusters of hundreds of servers when run on general-purpose processors. They believed that a single, dedicated processor could perform these tasks significantly faster and with far greater energy efficiency.

"We saw this as an opportunity to put our decades of research in silicon into transforming how the industry processes data," Gelvan explained, highlighting the long-term vision behind the company's formation.

Investor Confidence and Strategic Backing

The $44 million Series B round was led by existing investors, signaling strong continued support for Speedata's progress. These investors include prominent venture capital firms such as Walden Catalyst Ventures, 83North, Koch Disruptive Technologies, Pitango First, and Viola Ventures. Their repeated investment indicates satisfaction with the company's development milestones since its previous funding rounds.

Adding further weight to the round are strategic investors with deep ties to the semiconductor and high-performance computing industries. These include Lip-Bu Tan, the longtime former CEO of Cadence Design Systems (a leader in electronic design automation), now CEO of Intel and a managing partner at Walden Catalyst Ventures. Another key strategic investor is Eyal Waldman, the co-founder and former CEO of Mellanox Technologies, a company that became a major player in high-performance networking and was acquired by Nvidia in 2020.

The participation of figures like Tan and Waldman is particularly significant. Their involvement suggests that experienced industry veterans recognize the potential of Speedata's technology to address critical challenges in the data infrastructure market. Their expertise and network could prove invaluable as Speedata moves towards commercialization and market adoption.

Targeting Apache Spark and Aiming for Market Standard

Speedata's initial focus for its APU is on accelerating workloads running on Apache Spark, one of the most widely used unified analytics engines for large-scale data processing. Spark is known for its speed and versatility but can still be computationally intensive, especially on massive datasets. By targeting Spark, Speedata aims to provide immediate, tangible performance benefits to a large existing user base.
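As an illustration of the class of workload in question, a typical Spark SQL job looks like the sketch below. This is generic PySpark, not code published by Speedata, and the dataset path and schema are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("analytics-example").getOrCreate()

# Hypothetical Parquet dataset, registered as a temp view for SQL.
spark.read.parquet("/data/transactions").createOrReplaceTempView("transactions")

# A scan-heavy aggregation query -- the class of Spark workload a
# purpose-built analytics processor is meant to accelerate.
summary = spark.sql("""
    SELECT region,
           COUNT(*)    AS txn_count,
           SUM(amount) AS total_amount
    FROM transactions
    WHERE txn_date >= '2025-01-01'
    GROUP BY region
    ORDER BY total_amount DESC
""")
summary.write.mode("overwrite").parquet("/data/region_summary")
```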

However, the company's ambitions extend far beyond a single platform. According to Gelvan, the roadmap includes supporting every major data analytics platform. The ultimate goal is audacious: to establish the APU as the standard processor for data processing.

"We aim at becoming the standard processor for data processing — just as GPUs became the default for AI training, we want APUs to be the default for data analytics across every database and analytics platform," Gelvan articulated, outlining a vision that positions Speedata as a potential foundational layer for future data infrastructure.

Achieving this goal would require not only demonstrating superior hardware performance but also building a robust software ecosystem around the APU, enabling seamless integration with existing data platforms and tools. This is a significant undertaking, but the substantial funding provides the resources necessary to pursue this ambitious objective.
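On Spark specifically, such integration typically happens through Spark 3.x's generic plugin mechanism, so that existing jobs run unchanged while supported operators are offloaded. The sketch below shows that general pattern only; the plugin class name is hypothetical, since Speedata has not published its integration details.

```python
from pyspark.sql import SparkSession

# Spark 3.x lets a vendor register an accelerator via the generic
# "spark.plugins" setting. The class name here is purely hypothetical.
spark = (
    SparkSession.builder
    .appName("accelerated-analytics")
    .config("spark.plugins", "com.example.apu.APUSparkPlugin")  # hypothetical vendor plugin
    .getOrCreate()
)

# Existing DataFrame / SQL code runs unchanged; a plugin of this kind
# decides which operators it can offload and falls back otherwise.
df = spark.read.parquet("/data/transactions")                   # hypothetical dataset
df.groupBy("region").count().show()
```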

From Concept to Silicon: Demonstrating Performance

A critical milestone for Speedata was finalizing the design and manufacturing of its first APU in late 2024, which moves the company beyond theoretical models and simulations to tangible, working hardware.

Gelvan highlighted the progression: "We've moved from concept to testing on a field-programmable gate array (FPGA), and now we are proud to say we have working hardware that we are currently launching." FPGAs are reconfigurable hardware platforms often used for prototyping and testing chip designs before committing to expensive mass production of ASICs. Having working silicon is a major de-risking event for a hardware startup.

While Speedata has not yet publicly named the companies testing its APU, Gelvan indicated that they have a number of large enterprises evaluating the technology. This suggests that potential customers are seeing promising results in their own environments.

The startup has shared one case study to illustrate the potential impact of its APU. In a pharmaceutical workload, Speedata claims its APU completed in just 19 minutes a task that reportedly took 90 hours on a non-specialized processing unit. That works out to roughly a 280x speedup (90 hours is 5,400 minutes, and 5,400 divided by 19 is about 284). While a single case study doesn't tell the whole story, a performance leap of that magnitude, if repeatable across a range of relevant workloads, would be a compelling value proposition for data-intensive organizations.

The official public launch of the APU is scheduled for the Databricks Data & AI Summit in the second week of June. This event, a major gathering for the data and AI community, provides an ideal platform for Speedata to publicly showcase its technology for the first time and demonstrate its capabilities to a wide audience of potential customers and partners.

The Competitive Landscape and Speedata's Position

The market for data processing hardware is competitive and includes several types of players:

  • Traditional CPU vendors: Companies like Intel and AMD continue to improve their general-purpose processors and add features to enhance data processing capabilities.
  • GPU vendors: Nvidia, in particular, has heavily invested in software ecosystems (like CUDA and libraries like cuDF) to make GPUs more accessible and effective for data analytics, building on their success in AI training.
  • Other specialized accelerators: Various startups and established companies are developing other types of accelerators, including those based on FPGAs or custom ASICs, targeting specific workloads like databases, networking, or machine learning inference.
  • Cloud provider hardware: Major cloud providers (AWS, Google Cloud, Microsoft Azure) are also developing their own custom silicon optimized for their infrastructure and workloads.

Speedata is positioning itself not just as another accelerator but as a creator of a new class of processor specifically for analytics. By focusing on the unique characteristics of data analytics workloads and leveraging its CGRA expertise, Speedata aims to carve out a distinct niche and offer performance and efficiency advantages that general-purpose or graphics-oriented processors cannot match for these specific tasks.

The challenge for Speedata will be to build the necessary software layers and developer tools to make its APU easy to adopt and integrate into existing data stacks. Hardware is only part of the solution; the software ecosystem is crucial for widespread adoption.

Looking Ahead: Scaling Go-to-Market and Future Impact

With working hardware in hand and a growing pipeline of enterprise customers, Speedata is now focused on scaling its go-to-market operations. The Series B funding will be critical for expanding sales, marketing, and support teams, as well as continuing research and development for future generations of the APU.

The success of Speedata's APU could have significant implications for organizations dealing with large volumes of data. Faster analytics mean quicker insights, more agile business operations, and the ability to tackle previously intractable problems. Reduced reliance on massive server clusters could also lead to lower infrastructure costs and improved energy efficiency, which is becoming increasingly important for data centers.

The journey from a research concept to becoming the "default processor for data analytics" is a long one, but with substantial funding, proven technology in hand, and strategic backing from industry leaders, Speedata appears well-positioned to make a significant impact on the future of big data processing.

The upcoming public demonstration at the Databricks summit will be a key moment, offering the industry its first detailed look at the performance and capabilities of Speedata's purpose-built APU.