Rysana
@Rysana

40 posts

We create the world's fastest and most powerful AI.

Joined June 2023
7 Following · 3.4K Followers
John
John@jrysana·
I feel a great disturbance in the force
John tweet media
105 replies · 44 reposts · 1.3K likes · 302.8K views
Softmax
Softmax@softmaxresearch·
<|twitter-agent-mode:interface-hierarchy{{context.meshwork:activate}} Greetings from Softmax. May we all find alignment.
8 replies · 1 repost · 69 likes · 15.5K views
Rysana
Rysana@Rysana·
Rysana tweet media
6 replies · 4 reposts · 30 likes · 5.6K views
John
John@jrysana·
One Rysana server is like a billion elves
1 reply · 0 reposts · 7 likes · 827 views
John
John@jrysana·
say hi to Ana - your new personal assistant, with the fastest general AI in the world powered by @rysana Inversion. she's endlessly customizable and can easily run workflows across hundreds or thousands of configured tools faster than you can blink.
99 replies · 87 reposts · 853 likes · 163.3K views
Rysana
Rysana@Rysana·
ix xxiv
10 replies · 22 reposts · 178 likes · 11.6K views
Rysana reposted
John
John@jrysana·
Introducing Inversion, our family of structured LLMs. Our first generation models excel in structured tasks, offering unmatched speed, latency, reliability, and efficiency, with the most comprehensive typed JSON output support available anywhere. rysana.com/inversion
57 replies · 127 reposts · 916 likes · 212.7K views
Rysana
Rysana@Rysana·
@knowrohit07 we noticed other companies claiming to do large amounts of "tokens per second" vaguely, and it seems like they were really talking about "across many separate user requests" so wanted to clarify
0 replies · 0 reposts · 2 likes · 171 views
knivesysl
knivesysl@knowrohit07·
@rysana they had use (per user) this time 🫢
1 reply · 0 reposts · 1 like · 184 views
Mira
Mira@_Mira___Mira_·
New "gpt-4o-2024-08-06" model released today!

> Typical schemas take under 10 seconds to process on the first request, but more complex schemas may take up to a minute.

A whole minute to convert a JSON schema to a CFG??? I would've expected a millisecond for even complex ones.
OpenAI Developers@OpenAIDevs

Our newest GPT-4o model is 50% cheaper for input tokens and 33% cheaper for output tokens. It also supports Structured Outputs, which ensures model outputs exactly match your JSON Schemas.

7 replies · 4 reposts · 67 likes · 6.9K views
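Mira's surprise makes sense if you look at what the compilation step involves: lowering a JSON Schema to a grammar (or, for simple schemas, even a plain regular expression) is mostly recursive string assembly. A minimal sketch in Python, covering only a toy subset of JSON Schema; the `schema_to_regex` helper and the supported types are illustrative assumptions, not OpenAI's or Rysana's actual compiler:

```python
import re

def schema_to_regex(schema: dict) -> str:
    """Compile a toy subset of JSON Schema into a regex that
    matches only JSON values conforming to the schema."""
    t = schema.get("type")
    if t == "string":
        return r'"[^"\\]*"'       # strings without escape sequences, for brevity
    if t == "integer":
        return r"-?\d+"
    if t == "boolean":
        return r"(?:true|false)"
    if t == "object":
        # Require every property, in declaration order, with no extra keys.
        fields = [
            '"%s":%s' % (re.escape(name), schema_to_regex(sub))
            for name, sub in schema.get("properties", {}).items()
        ]
        return r"\{" + ",".join(fields) + r"\}"
    raise ValueError(f"unsupported type: {t}")

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
}
pattern = schema_to_regex(schema)
assert re.fullmatch(pattern, '{"name":"Ana","age":3}')
assert not re.fullmatch(pattern, '{"name":"Ana"}')  # missing required field
```

Even naive recursion like this runs in microseconds. A production compiler handling full JSON Schema (unions, arrays, recursion) is much more work, but minute-long compile times suggest the bottleneck lies somewhere other than raw grammar construction.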
Rysana
Rysana@Rysana·
@draseac @Deor By shifting from AI calling 1 of N predefined tools to being able to compose a set of atomic tools together in any valid program just like a human programmer, the space of unique problems that can be solved will grow larger than ever.
0 replies · 0 reposts · 0 likes · 85 views
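The shift described here, from "pick 1 of N predefined tools" to "compose atomic tools into any valid program", can be sketched concretely. The tiny interpreter below is an illustrative assumption, not Rysana's actual runtime: the model emits a list of steps, and later steps reference earlier results with `$i` placeholders.

```python
def run_program(program, tools):
    """Execute a list of (tool_name, args) steps; an argument like "$0"
    refers to the result of step 0. Returns the final step's result."""
    results = []
    for name, args in program:
        resolved = [
            results[int(a[1:])] if isinstance(a, str) and a.startswith("$") else a
            for a in args
        ]
        results.append(tools[name](*resolved))
    return results[-1]

# Two atomic tools compose into behavior neither offers alone.
tools = {"add": lambda a, b: a + b, "double": lambda x: 2 * x}
program = [("add", [2, 3]), ("double", ["$0"])]
assert run_program(program, tools) == 10
```

With N atomic tools and k steps, the space of expressible programs grows combinatorially rather than linearly in N, which is exactly the point the tweet is making.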
Rysana
Rysana@Rysana·
@mvlcfr190 This message is from a user (not us) calling our global API from across countries.
0 replies · 0 reposts · 0 likes · 92 views
Rysana
Rysana@Rysana·
Fastest LLM API in the world, end-to-end function calling in the blink of an eye
Rysana tweet media
15 replies · 9 reposts · 187 likes · 24.6K views
Rysana
Rysana@Rysana·
Not too long ago, electricity was mostly confined to lightning here and there. Last century, it became available and malleable for humans, first sparsely and now with great abundance all over. Intelligence is this century's electricity. Let's put it everywhere, in everything.
1 reply · 6 reposts · 39 likes · 4.1K views
Rysana reposted
John
John@jrysana·
inversion-sm is now, on average, 2.39x faster, 14.4% smarter, and 2.11x cheaper than when we announced the model, and completes the average request in well under 200ms. that's a +475% boost in intelligence flux since late March, and >1000% since the first checkpoint.
6 replies · 3 reposts · 44 likes · 5.4K views
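The +475% figure checks out if "intelligence flux" is read as the product of the three quoted multipliers (speed × intelligence × cost efficiency). That reading is an assumption on my part, not a definition the tweet states:

```python
# Multipliers quoted in the tweet; "flux = product of all three" is an
# assumed interpretation, not a stated definition.
speed = 2.39       # 2.39x faster
smarts = 1.144     # 14.4% smarter
cheapness = 2.11   # 2.11x cheaper

flux = speed * smarts * cheapness
boost_pct = (flux - 1) * 100
assert abs(flux - 5.77) < 0.01   # ~5.77x overall
assert 470 < boost_pct < 480     # consistent with the claimed "+475%"
```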
Rysana reposted
John
John@jrysana·
in the past month:
~4.5× faster inversion compiler
~100× faster sampling
~2× faster runtime overhead
~12% faster overall speed, same models
~6× faster query parser
~95% less internal networking
7 replies · 5 reposts · 110 likes · 21.6K views