Liz Corrigan, our Head of Engineering, raised eyebrows at the STAC Spring Summit in London yesterday. Liz proudly announced that VOLLO achieved <5.1us latency in a STAC benchmark for #ML inference in #fintech using LSTM models. Unrivalled latency and easy to adopt. myrtle.ai/fintech
Lowest latency: it’s official! @stacresearch has released its report confirming that VOLLO achieved the lowest latency in their #fintech #ML inference benchmark tests. Leverages great technology from @intel & @BittWareInc. myrtle.ai/fintech
Excited that the performance of VOLLO, our #ML inference appliance for fintech, will be officially revealed by @stacresearch in London today. Leverages great technology from @intel & @BittWareInc. myrtle.ai/fintech
If you missed the live event, here's another chance to see Sam Davis' keynote on low latency speech synthesis at the Energy Efficient Machine Learning & Cognitive Computing workshop. #machinelearning #AI lnkd.in/d7uY6yk
MLCommons launched today and we’re proud to be a founding member of this new open engineering consortium. Great minds working together to accelerate innovation in #MachineLearning. myrtle.ai/press/myrtle-a… @commons_ml
We're proud to have delivered 8x performance for WaveNet compared with a GPU using Intel's new Stratix 10 NX FPGA. Check out the demo video released today! #MachineLearning blogs.intel.com/psg/wavenet-ne…
We are currently looking for a Machine Learning Engineer and a Systems Operations Engineer to join our team! To find out more, please visit: myrtle.ai/careers/. Please send your CV to careers@myrtle.ai if either of these positions suits you.
#AI #MachineLearning #Careers
We’re very excited to launch our new, innovative solution to a major bottleneck in #ML recommendation models. SEAL has the potential to save hyperscale and tier one data center companies hundreds of millions of dollars every year. myrtle.ai/press/myrtle-a… #AI
Intel has just announced its first AI-optimized FPGA, the Intel® Stratix® 10 NX FPGA, to address the rapid increase in AI model complexity.
intel.ly/3egXCje
We are always looking for brilliant minds to join our growing team at Myrtle.ai. You can view current opportunities and find out more here: myrtle.ai/careers/#posit… #AI #MachineLearning #Careers
Join our growing team of experts! If you’re passionate about AI and want to be part of a flourishing company, we want to hear from you! Please send your CV to careers@myrtle.ai if any of the positions below suit you. myrtle.ai/careers/ #AI #machinelearning #Careers
Come and see our MAU™ RNN Inference Accelerator at the @XilinxInc Technology Day in Tel Aviv, Israel on 25th February. …erating-the-future-event.events.co.il/home#top #machinelearning #xilinx2020 #ai
At #xdf2019 today Myrtle announced an exciting new RNN accelerator for the Xilinx Alveo U250 card, designed to deliver maximum throughput in latency-bound applications and demonstrated using an industry-standard STT benchmark. myrtle.ai/press/myrtle-a…
CIFAR-10 to 94% in 26 SECONDS on a single GPU, anyone? Final post of our How To... series: @dcpage3 opens a bag of tricks to drive training times ever further down. myrtle.ai/how-to-train-y… One for everyone involved with #MLPerf benchmarks and #AI #ML