Tripling product revenues, comprehensive developer tools, and scalable inference IP for vision and LLM workloads position Quadric as the platform for on-device AI. ACCELERATE Fund, managed by BEENEXT ...
Responses to AI chat prompts not snappy enough? California-based generative AI company Groq has a super quick solution in its LPU Inference Engine, which has recently outperformed all contenders in ...
Predibase's Inference Engine Harnesses LoRAX, Turbo LoRA, and Autoscaling GPUs to Deliver 3-4x Throughput and Cut Costs by Over 50% While Ensuring Reliability for High-Volume Enterprise Workloads. SAN ...
DeepSeek’s release of R1 this week was a ...
The burgeoning AI market has seen countless startups funded on the strength of their ideas for building faster, lower-power, or lower-cost AI inference engines. Part of the go-to-market ...
Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation. Maia 200 is an AI inference powerhouse: an ...
Inference, what happens after you prompt an AI model like ChatGPT, has taken on more salience now that traditional model scaling has stalled. To get better responses, model makers like OpenAI and ...
Tripling product revenues, comprehensive developer tools, and scalable inference IP for vision and LLM workloads position Quadric as the platform for on-device AI. BURLINGAME, Calif., Jan. 14, 2026 ...
Quadric Chimera™ processor IP is designed for this reality. Unlike fixed-function NPUs locked to today's model architectures, Chimera is fully programmable: it runs any AI model, current or future ...
BURLINGAME, Calif. -- Quadric®, the inference engine that powers on-device AI chips, today announced an oversubscribed $30 million Series C funding round, bringing total capital raised to $72 million.