

Symflower

@symflower
Your virtual coding assistant who spots errors and unexpected behavior, does routine tasks for you and generates unit tests with meaningful values in real time.





Insights from analyzing >100 LLMs for DevQualityEval v1.0 (generating quality code) in our latest deep dive:

- 👑 Google’s Gemini 2.0 Flash Lite is the king of cost-effectiveness (our previous king, OpenAI’s o1-preview, is 1124x more expensive and scores worse)
- 🥇 Anthropic’s Claude 3.7 Sonnet is the functionally best model (with help) … by far
- 🏡 Qwen’s Qwen 2.5 Coder is the best model for local use
- Models are on average getting better at code generation, especially in Go
- Only one model is on par with static tooling for migrating JUnit 4 code to JUnit 5
- Surprise! Providers are unreliable for days after new popular models launch
- Let’s STOP the model naming MADNESS together: we proposed a convention for naming models
- We counted all the votes: v1.1 will bring JS, Python, Rust, …
- Our hunch that using static analysis to improve scoring pays off continues to hold true

All the other models, details, and how we continue to solve the "ceiling problem" are in the deep dive 👇🧵 (now with interactive graphs 🌈)

Looking forward to your feedback :-)
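The JUnit point above is concrete enough to sketch: migrating JUnit 4 to JUnit 5 is largely a deterministic rewrite of imports and annotations, which is why static tooling sets a high bar there. A minimal sketch under our own assumptions (the class name and rule table are ours; real migration tools such as OpenRewrite operate on the parsed AST, not on raw strings):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of the kind of deterministic rewrite a JUnit 4 -> 5 migration
// tool applies. Real static tooling works on the parsed AST; plain regex
// replacement like this is only an illustration and would misfire on
// comments, string literals, etc.
public class JUnit4To5Sketch {

    // Ordered rules: longer names first so "@BeforeClass" is never mangled
    // by the shorter "@Before" rule; "\\b" guards the end of each name.
    static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("org\\.junit\\.BeforeClass\\b", "org.junit.jupiter.api.BeforeAll");
        RULES.put("org\\.junit\\.Before\\b", "org.junit.jupiter.api.BeforeEach");
        RULES.put("org\\.junit\\.Test\\b", "org.junit.jupiter.api.Test");
        RULES.put("org\\.junit\\.Ignore\\b", "org.junit.jupiter.api.Disabled");
        RULES.put("@BeforeClass\\b", "@BeforeAll");
        RULES.put("@Before\\b", "@BeforeEach");
        RULES.put("@Ignore\\b", "@Disabled");
    }

    // Applies every rewrite rule in order and returns the migrated source.
    static String migrate(String source) {
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            source = source.replaceAll(rule.getKey(), rule.getValue());
        }
        return source;
    }

    public static void main(String[] args) {
        String junit4 = String.join("\n",
                "import org.junit.Before;",
                "import org.junit.Test;",
                "",
                "public class CalcTest {",
                "    @Before public void setUp() {}",
                "    @Test public void adds() {}",
                "}");
        System.out.println(migrate(junit4));
    }
}
```

Because the mapping is mechanical, a rule-based tool gets it right every time; an LLM has to reproduce the same rewrite without dropping or inventing annotations, which is exactly what the benchmark task measures.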


OpenAI's o1-preview is the king 👑 of code generation but is super slow and expensive 😱

This and other insights from analyzing >80 LLMs in the deep dive blog post for DevQualityEval v0.6 (generating quality code) 👇

- OpenAI’s o1-preview and o1-mini are slightly ahead of Anthropic’s Claude 3.5 Sonnet in functional score, but are MUCH slower and chattier.
- DeepSeek’s v2 is still the king of cost-effectiveness, but GPT-4o-mini and Meta’s Llama 3.1 405B are catching up.
- o1-preview and o1-mini are worse than GPT-4o-mini at transpiling code.
- Best in Go is o1-mini, best in Java is GPT-4 Turbo, best in Ruby is o1-preview.

Please support our work for the community by liking and sharing this post! 🙏

All the details and how we will solve the "ceiling problem" are in the deep dive: symflower.com/en/company/blo… (2x the content of the previous one!)












