sudocode @sudocode_ai

We use AI to help you create software better, faster. 🐦‍⬛

6 posts · Joined June 2023 · 3 Following · 28 Followers
sudocode @sudocode_ai
Love seeing the crazy ways we're trying to extract new utilities from LLMs. For those less familiar: relation extraction means identifying the relationship between two entities mentioned in a text. Doing this manually is super expensive, but with LLMs we can do it instantly!
Jerry Liu @jerryjliu0

The `rebel-large` model is awesome for relation extraction 🔗 Paired with CUDA, it's blazing fast ⚡️ With @llama_index 🦙, we can now build a knowledge graph over any text data super quickly! 🕸️ Full Colab notebook showing how you can use it:
colab.research.google.com/drive/1G6pcR0p…
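For context on how this works: `rebel-large` is a seq2seq model that emits relations as a linearized string using special tokens (`<triplet>`, `<subj>`, `<obj>`). A minimal sketch of a parser for that output format might look like this (the sample string below is illustrative, not real model output):

```python
def parse_rebel_output(text: str):
    """Parse REBEL-style linearized output into (head, relation, tail) triplets.

    The model emits sequences of the form:
      <triplet> head <subj> tail <obj> relation
    repeated once per extracted triplet.
    """
    triplets = []
    for chunk in text.split("<triplet>"):
        chunk = chunk.strip()
        if "<subj>" not in chunk or "<obj>" not in chunk:
            continue  # empty or malformed chunk; skip it
        head, rest = chunk.split("<subj>", 1)
        tail, relation = rest.split("<obj>", 1)
        triplets.append((head.strip(), relation.strip(), tail.strip()))
    return triplets

# Illustrative example of the linearized format:
sample = "<triplet> Llama 2 <subj> Meta <obj> developer"
print(parse_rebel_output(sample))  # [('Llama 2', 'developer', 'Meta')]
```

The resulting (head, relation, tail) tuples are exactly what you'd feed into a knowledge-graph builder like the one in the Colab above.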

sudocode @sudocode_ai
We at sudocode are excited for all the new LLMs being released! What kinds of things do you plan on building?
AK @_akhaliq

Meta releases Llama 2: Open Foundation and Fine-Tuned Chat Models
paper: ai.meta.com/research/publi…
blog: ai.meta.com/llama/
"In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs."

sudocode @sudocode_ai
This goes along with our intuition that ever-growing context windows aren't a one-size-fits-all solution. sudocode uses specialized agents that compress context in order to achieve better results. What are some tactics y'all are trying?
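To make "compress context" concrete, here is a toy sketch of one common tactic: keeping only the most recent messages that fit a token budget. This is illustrative only, not sudocode's actual approach, and the word-count tokenizer is a deliberate simplification:

```python
def compress_context(messages, budget, n_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit within a token budget.

    A crude stand-in for real context compression (summarization,
    retrieval, hierarchical agents, etc.). n_tokens just counts
    whitespace-separated words here, for illustration.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = n_tokens(msg)
        if used + cost > budget:
            break  # oldest messages get dropped first
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

Real systems usually summarize or retrieve the dropped history instead of discarding it, but the budgeting loop looks much the same.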
sudocode @sudocode_ai
Interested in formatting your OpenAI outputs? One common issue with code outputs is unwanted markdown. We've found that few-shot examples can help! For example, add this to your prompt:

Bad response:
```
print("hello_world")
```

Good response:
print("hello_world")
sudocode @sudocode_ai
Shoutout to @posthog for one of the easiest setups ever! Took us 5 minutes, and now we have session replays, insights, and more, all for free. We highly recommend that new companies instrument analytics early.
sudocode @sudocode_ai
One nifty tip for making the most of your rate limits: if your application depends on OpenAI's gpt-4 or gpt-3.5-turbo, one way to get more bang for your buck is to round-robin through the models. Each model (including the 32k variants and dated snapshots) has a separate rate limit!
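The rotation itself can be sketched in a few lines with `itertools.cycle`. The model names in the pool are examples; swap in whichever models and snapshots your account actually has access to, and note this only suits tasks where any model in the pool produces acceptable output:

```python
from itertools import cycle

# Example pool of models/snapshots, each with its own rate limit.
MODELS = ["gpt-4", "gpt-4-32k", "gpt-3.5-turbo", "gpt-3.5-turbo-16k"]
_rotation = cycle(MODELS)

def next_model() -> str:
    """Return the next model in round-robin order, spreading requests
    across per-model rate limits instead of exhausting one model's quota."""
    return next(_rotation)
```

Each API call then uses `model=next_model()`, so sustained throughput becomes roughly the sum of the individual models' limits rather than any single one.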