pls seed

704 posts


@YYYYOOOO77


Joined August 2024
115 Following · 11 Followers
pls seed
pls seed@YYYYOOOO77·
@antirez Microsoft made that, and nobody bought it: the Surface Duo.
English
1
0
4
428
antirez
antirez@antirez·
Important features:
1. It has *two* screens, it is not a foldable screen. So there is no problem with folding angle, and when closed it has the two screens facing in opposite directions.
2. The aspect ratio is different: it is a taller phone, so when open you can type with two hands.
3. The software and keyboard are designed for this use case. It has a native SSH client, a Linux subsystem, ...
English
7
0
30
6.5K
antirez
antirez@antirez·
I want something like that (made with AI image generation, not a real thing).
antirez tweet media
English
97
3
251
39.5K
pls seed
pls seed@YYYYOOOO77·
@neuralease @scaling01 Yeah, just 2 days ago I lost hours where Opus 4.7 debugging just shit itself because it did not read enough source code. People keep forgetting that Chinese models are so cheap and good enough now that you can just spend more tokens, and for most of the work it will be enough.
English
0
0
1
82
pls seed
pls seed@YYYYOOOO77·
@neuralease @scaling01 I use it daily, literally daily. I have Max 5x and Pro Lite together with Kimi and GLM (now that it's fixed). 8 months, my ass.
English
1
0
1
375
Neuralease
Neuralease@neuralease·
@scaling01 Not on all tasks, but: KimiK2.6, Mimo-V2.5-Pro, Deepseek V4 Pro Max and GLM 5.1 are comparable, even exceeding Sonnet 4.6 in many cases. I personally use KimiK2.6; it's free, without you having to deal with the "adaptive thinking" BS. I don't say this in a mean tone, just mad at anth
English
2
1
160
7.1K
pls seed
pls seed@YYYYOOOO77·
@Presidentlin Oh, come on, all the problems that were there before, none of them are gone now; he just changed his opinion. That can happen. When there is a lot of money/influence/ego on the line, I tend not to believe it.
English
0
0
0
29
pls seed
pls seed@YYYYOOOO77·
@joshmanders @thdxr @yiannis__p Everybody literally complained about GitHub doing this. And I don't buy the "to train open source models" part. Then make the dataset public from day 1; otherwise it's just a non-binding promise.
English
0
0
0
115
Josh
Josh@joshmanders·
@thdxr @yiannis__p Yes, and that's kind of the point. The opposite is that you're preying on people who forget to opt out. Telemetry should always be opt-in; if that stops being a meaningful way to lower costs, then find a better way to lower costs.
English
3
0
10
659
dax
dax@thdxr·
opencode go currently has zero data retention. However, we can increase limits and make it all more sustainable if we collect data to train future open source models. You can opt out of this - is that something you'd be ok with?
English
237
16
621
84.5K
pls seed
pls seed@YYYYOOOO77·
@thdxr @yiannis__p Ah, come on, you know that it's not right and you still want to do it. Why did you ask, then?
English
1
0
1
416
dax
dax@thdxr·
@yiannis__p People always say this, but you can understand that it won't be meaningful enough to actually increase limits. The opt-in is paying $10 for $60 of inference.
English
6
0
180
8.7K
Tushant Suneja
Tushant Suneja@tushant_suneja·
@leonabboud we ran into similar issues with local hosting and that's why we built Snowy AI, a secure cloud interface that makes it easy to use these tools without the hassle of setup and variable costs
English
1
0
0
93
Leon Abboud
Leon Abboud@leonabboud·
Anthropic banning OpenClaw got me to downgrade my $200/m subscription, and I'm now back on the Plus plan for OpenAI. I still think it was a massive fumble, losing AI power users to the competition. No one likes pay-as-you-go usage; people would rather have a fixed plan they're on. Just like no one would like paying per day to go to the gym, even though most would likely save money paying per day rather than paying for the month. If OpenClaw was costing Anthropic a lot, why couldn't they create a new plan called "AI agents" that was slightly more expensive but was a fixed monthly rate? Most, including myself, would have been willing to pay a bit more for a plan specifically designed for AI agents.
English
11
0
53
2.5K
pls seed
pls seed@YYYYOOOO77·
@0xSero lol, we are inventing the IDE backwards.
English
1
0
5
831
0xSero
0xSero@0xSero·
Warp + Droid is almost perfect. It needs a browser; I need browsers everywhere, please Warp. Still, can't believe they open sourced it. Very top dawg of them.
0xSero tweet media
English
23
19
466
21.1K
pls seed
pls seed@YYYYOOOO77·
@SamuelWayne0 @haider1 I find the 3.6 27B with reasoning disabled better. On a 5090 I get like 1.8 TTFT and 119 tps with 155 context size. This thing rips. That's just vLLM with a couple of tweaks and sakamakismile/Qwen3.6-27B-Text-NVFP4-MTP
English
0
0
0
101
SamWayne
SamWayne@SamuelWayne0·
@haider1 What are your concrete use cases, and how many TPS are you getting?
English
1
0
0
634
Haider.
Haider.@haider1·
google gemma-4 31b is such an underrated beast of a model. It's probably the first consumer-hardware-sized model that is genuinely usable for simple conversations, not just specific tasks. To put that into context, I prefer it over the free-tier ChatGPT model, and you should too.
English
23
10
225
13.7K
pls seed
pls seed@YYYYOOOO77·
@burkov I used it in opencode; it works OK, not great, not terrible. If I got a bunch more tokens to try it, I would not complain. It feels better at writing business models than at coding.
English
0
0
3
1.1K
BURKOV
BURKOV@burkov·
Did anyone try to use Gemini 3.1 Pro with Codex as the harness? Is Antigravity the problem with using Gemini for agentic coding, or is it the LM?
English
50
1
322
45.3K
Rohan Varma
Rohan Varma@TheRohanVarma·
Many times a day now, people much smarter than me tell me they are Codex-pilled and that GPT-5.5 was a watershed moment for them. Engineers keep telling me the Codex App is the first interface that got them to leave the terminal agents behind. The Codex App is a fundamental shift in the way I work. I can't even imagine what I was doing before using the Codex App, but it definitely wasn't pretty. Check it out and let us know how to make it even better :)
English
94
19
685
50.8K
pls seed
pls seed@YYYYOOOO77·
@basedjensen I hate it too, but is the alternative to ship a bunch of CLIs to normies?
English
0
0
0
200
mrciffa
mrciffa@davideciffa·
@YYYYOOOO77 1.5x is a pretty bad speculative decoder; our luce dflash + ddtree has a 3-4x speedup, and we are working to make it even better
English
1
0
0
80
mrciffa
mrciffa@davideciffa·
Fast inference needs heterogeneous hardware: a fast small machine with high TFLOPS and high memory bandwidth, and a slower, bigger one hosting the main target model. It is way better than a DGX, and the cost is similar
Sandro@pupposandro

Testing a Ryzen Strix Halo 128GB + RTX 3090 24GB setup atm. On paper it's perfect: the 3090 handles speed, the Strix Halo handles memory, and you can run everything well, including dense or bigger models. The catch is connecting them together cleanly. Still working on that. Cost is ~$4,000. Still cheaper than the DGX.

English
1
0
15
1.6K
pls seed
pls seed@YYYYOOOO77·
@davideciffa Okay, I understand it now. But you need a great drafter, and even then you can expect maybe 1.5x. What is the base tps on a Strix? So the outcome doesn't look promising; you have to do a lot of work.
English
1
0
0
41
mrciffa
mrciffa@davideciffa·
@YYYYOOOO77 I mean something else: the main model lives in the slow memory (270 GB/s); you don't split it. The fast drafters (for speculative decode/speculative prefill) live on the RTX. You can think of it as the poor-GPU-man's version of chips with gigantic SRAM, like Groq
English
1
0
0
75
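The draft-and-verify loop this thread is arguing about can be sketched in toy Python. This is purely illustrative and is NOT the luce dflash/ddtree code or any real model: `draft_model` and `target_accepts` are made-up stand-ins. The point it shows is that one batched verification pass by the big (slow-memory) model can yield several tokens when the small (fast) drafter is good, which is where the claimed speedup comes from:

```python
import random

random.seed(0)
VOCAB = list("abcd")

def draft_model(prefix, k):
    # Stand-in for a cheap drafter living in fast GPU memory:
    # proposes k tokens ahead in one go.
    return [random.choice(VOCAB) for _ in range(k)]

def target_accepts(prefix, token):
    # Stand-in for the big target model verifying one proposed token.
    # Accept ~70% of proposals to mimic a decent drafter.
    return random.random() < 0.7

def speculative_decode(steps=100, k=4):
    out = []
    target_calls = 0
    while len(out) < steps:
        proposal = draft_model(out, k)
        target_calls += 1  # one batched verification pass per round
        for tok in proposal:
            if target_accepts(out, tok):
                out.append(tok)
            else:
                out.append(random.choice(VOCAB))  # target's own correction
                break  # rest of the draft is discarded
    return out, target_calls

tokens, calls = speculative_decode()
print(len(tokens), calls)  # typically far fewer target passes than tokens
```

The effective speedup is roughly tokens-per-target-pass, which is why a weak drafter (low acceptance rate) caps you near 1.5x while a strong one can approach k.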
pls seed
pls seed@YYYYOOOO77·
@Yampeleg It's still a huge mess: too much CPU if you have multiple ongoing sessions randomly using the built-in browser use or trying to get to the Chrome MCP directly. The positive is that it creates follow-ups automatically, and the auto-compacting can be great, but it can also be a huge miss.
English
0
0
3
1.4K
Yam Peleg
Yam Peleg@Yampeleg·
I was not expecting the Codex App to be even better than using the terminal. Highly recommend everyone to try. If you are on Linux just tell GPT-5.5-xhigh to “find a way to get it, it’s known to be easy”
English
36
17
413
408.6K
World of Statistics
World of Statistics@stats_feed·
International tourism (number of annual arrivals):
🇫🇷 France: 117.1m
🇵🇱 Poland: 88.5m
🇲🇽 Mexico: 51m
🇺🇸 USA: 45m
🇹🇭 Thailand: 39.9m
🇮🇹 Italy: 38.4m
🇨🇿 Czechia: 37.2m
🇪🇸 Spain: 36.4m
🇨🇦 Canada: 32.4m
🇭🇺 Hungary: 31.6m
🇨🇳 China: 30.4m
🇭🇷 Croatia: 21.6m
🇮🇳 India: 17.9m
🇹🇷 Turkey: 15.9m
🇩🇰 Denmark: 15.6m
🇩🇪 Germany: 12.4m
🇬🇧 UK: 11.1m
🇦🇷 Argentina: 7.4m
🇷🇺 Russia: 6.3m
🇧🇷 Brazil: 6.3m
🇳🇬 Nigeria: 5.2m
🇯🇵 Japan: 4.1m
🇮🇩 Indonesia: 4m
🇸🇪 Sweden: 1.9m
🇦🇺 Australia: 1.8m
🇳🇴 Norway: 1.4m
🇨🇺 Cuba: 1m
🇵🇰 Pakistan: 0.9m
🇫🇮 Finland: 0.9m
🇲🇻 Maldives: 0.55m
🇮🇸 Iceland: 0.5m
🇻🇪 Venezuela: 0.4m
🇲🇩 Moldova: 0.03m
Source: Yearbook of Tourism Statistics, Compendium of Tourism Statistics and data files, UN Tourism. Data from 2020 or latest available (France is 2020, Poland is 2019, for example).
English
358
291
3.6K
723.1K