DatSciX

266 posts

@DatSciX

AI Engineer/Architect | Data Scientist | Founder & CEO of DatSciX | eDiscovery Expert

Joined February 2021

51 Following 18 Followers
DatSciX
DatSciX@DatSciX·
@WellsJorda89710 So, his constitutional rights were violated on multiple levels....
English
2
0
1
1.6K
Reverend Jordan Wells
Reverend Jordan Wells@WellsJorda89710·
TRENDING: Several outspoken Christian #NFL stars are showing public support for Jaden Ivey after the Chicago Bulls waived him yesterday over his religious comments on LGBTQ issues and Pride Month.

The players who posted in support:
• TreVeyon Henderson (Patriots RB)
• Dez Bryant
• Kam Curl (Rams S)
• Tucker Kraft (Packers TE)
• Azareye'h Thomas

Many shared Bible verses emphasizing faith, persecution, and standing for righteousness — including Matthew 5:10: “Blessed are those who are persecuted for righteousness’ sake, for theirs is the kingdom of heaven.”

This has sparked heated debate about faith, free speech, and sports leagues. What do you think — standing on principle or crossing a line? 👇

#JadenIvey #NFL #ChristianAthletes #FaithInSports
Reverend Jordan Wells tweet media
English
29
338
2.1K
67.3K
DatSciX
DatSciX@DatSciX·
@burkov One has certainly murdered vastly more than the other.
English
0
0
0
11
BURKOV
BURKOV@burkov·
It's so dumb that, in order to support individual liberty, limited government, free capitalism, and meritocracy, you must also support gun fetishism and oppose a woman's right to her own body. It's so dumb that, in order to support the separation of church and state and accessible public education, you must also support the crazy woke shit. A two-party system where none of the parties represent you is no better than a single-party system or a monarchy.
English
13
1
22
3K
Ahmad
Ahmad@TheAhmadOsman·
Qwen 3.5 27B (Dense) with Hermes Agent is REALLY GOOD
English
60
50
1K
123K
ayush🔮👨‍💻🔮
ayush🔮👨‍💻🔮@ayushagarwal027·
🦀 Did you know Rust is quietly making its way into Machine Learning?

Most ML engineers default to Python, and for good reason. But there's a growing movement pushing the boundaries with Rust, and arewelearningyet.com is your go-to tracker for that ecosystem. Here's why it matters:
✅ Rust offers near-C performance with memory safety: no GC pauses, no segfaults
✅ Zero-cost abstractions mean you don't sacrifice expressiveness for speed
✅ Projects like linfa and smartcore are making classical ML algorithms accessible in pure Rust
✅ Neural networks, NLP, GPU computing, reinforcement learning: the ecosystem is growing fast

Is it production-ready for ML at scale? Not quite yet. The site itself says it's "ripe for experimentation", but the trajectory is exciting.

If you're a Rust developer curious about ML, or an ML engineer who wants more control over performance, this is the moment to start exploring. The community is small but active, and there's real opportunity to contribute and shape where this goes.

👉 arewelearningyet.com

#Rust #MachineLearning #RustLang #MLEngineering #OpenSource
ayush🔮👨‍💻🔮 tweet media
English
3
23
168
6.1K
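The "classical ML in pure Rust" idea from the tweet above can be sketched with no crates at all. This is a from-scratch illustration of fitting y = ax + b by ordinary least squares, not the actual linfa or smartcore APIs:

```rust
// Minimal ordinary least squares for y = a*x + b, using only the standard library.
// Illustrative sketch; real projects would reach for linfa or smartcore.
fn fit_line(xs: &[f64], ys: &[f64]) -> (f64, f64) {
    let n = xs.len() as f64;
    let sx: f64 = xs.iter().sum();
    let sy: f64 = ys.iter().sum();
    let sxx: f64 = xs.iter().map(|x| x * x).sum();
    let sxy: f64 = xs.iter().zip(ys).map(|(x, y)| x * y).sum();
    let a = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
    let b = (sy - a * sx) / n; // intercept
    (a, b)
}

fn main() {
    // Points lying exactly on y = 2x + 1, so the fit recovers a = 2, b = 1.
    let xs = [0.0, 1.0, 2.0, 3.0];
    let ys = [1.0, 3.0, 5.0, 7.0];
    let (a, b) = fit_line(&xs, &ys);
    println!("a = {a:.3}, b = {b:.3}"); // a = 2.000, b = 1.000
}
```

The closed-form solution here is the textbook normal-equations formula for one feature; libraries like linfa generalize the same idea to many features and solvers.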
Ahmad
Ahmad@TheAhmadOsman·
New Tenstorrent cluster hot from the kitchen
> 1TB of VRAM
> 3TB DDR5 RAM
> 32TB SSD Storage

New product, will share more later

P.S. Can you find the cat in the picture?
Ahmad tweet media
English
121
29
593
80.1K
Hamzé 🦀
Hamzé 🦀@Hamzeml·
Do NOT try to “learn Rust first”. That’s a trap.

I’m launching a workshop series where we build tiny ML from scratch in Rust. No ML background needed. No Rust required. Just:
• basic coding
• y = ax + b
• vague memory of dy/dx

Drop a 🦀 if you want in.
English
40
13
146
7.6K
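The workshop pitch above — basic coding, y = ax + b, a vague memory of dy/dx — is exactly the recipe for gradient descent. A hypothetical sketch in that spirit (not the actual workshop material):

```rust
// Tiny "ML from scratch": fit y = a*x + b by gradient descent on mean squared error.
// Hypothetical sketch in the spirit of the workshop pitch, not its actual content.
fn fit_gd(xs: &[f64], ys: &[f64], lr: f64, steps: usize) -> (f64, f64) {
    let (mut a, mut b) = (0.0, 0.0);
    let n = xs.len() as f64;
    for _ in 0..steps {
        // Loss L = mean((a*x + b - y)^2)
        // dL/da = mean(2 * err * x),  dL/db = mean(2 * err)
        let (mut ga, mut gb) = (0.0, 0.0);
        for (&x, &y) in xs.iter().zip(ys) {
            let err = a * x + b - y;
            ga += 2.0 * err * x / n;
            gb += 2.0 * err / n;
        }
        a -= lr * ga; // step downhill along the gradient
        b -= lr * gb;
    }
    (a, b)
}

fn main() {
    // Data on y = 2x + 1; gradient descent converges toward a ≈ 2, b ≈ 1.
    let xs = [0.0, 1.0, 2.0, 3.0];
    let ys = [1.0, 3.0, 5.0, 7.0];
    let (a, b) = fit_gd(&xs, &ys, 0.05, 5000);
    println!("a ≈ {a:.2}, b ≈ {b:.2}");
}
```

Everything a first neural-network lesson needs is already in this loop; swapping the straight line for a composed function and the hand-written derivatives for autodiff is the rest of the story.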
DatSciX
DatSciX@DatSciX·
@mcuban Agent vs Agent. MadTV had it right! Yep, they are happening, and to match their speed, agents are needed to counter the bad ones.
English
0
0
1
26
Mark Cuban
Mark Cuban@mcuban·
Have we seen an “Agent” DDOS attack yet ? Isn’t it inevitable ?
English
368
47
906
301.9K
DatSciX
DatSciX@DatSciX·
Flying into Ft. Lauderdale Airport for the boat show. And this is what I saw. Super cool and nice work @USNavy
DatSciX tweet media
English
0
0
0
63
DatSciX
DatSciX@DatSciX·
@AOC Might be the first time I've ever agreed with you, but I do on this topic.
English
0
0
0
3
Alexandria Ocasio-Cortez
This is sad. I know as a politician these companies are going to spend a billion dollars against me for saying it but 🤷🏽‍♀️ Pervasive gambling is not good for society. It turns life into a casino, traps people in addiction & debt, surges domestic violence, and fosters manipulation.
Polymarket@Polymarket

We’re honored to announce MLB has named Polymarket as their Exclusive Prediction Market Exchange Partner. Polymarket 🤝 MLB

English
9K
11.3K
116.9K
10.6M
DatSciX
DatSciX@DatSciX·
@TheAhmadOsman Look at you my friend. Keep up the great work. Hope to work with folks like you on complex and useful projects.
English
0
0
0
16
DatSciX
DatSciX@DatSciX·
@sudoingX How does it compare to openfang? Is it best to run a local LLM, or an API with something like OpenRouter?
English
0
0
0
29
DatSciX
DatSciX@DatSciX·
@sudoingX I use Openfang and Hermes Agent. Very powerful when I mix them with other agentic apps.
English
0
0
1
164
Sudo su
Sudo su@sudoingX·
my DMs are full of this. openclaw users hitting walls and looking for something that actually works on their hardware.

hermes agent. local GPU. 35-50 tok/s on a 3060. responds in seconds not minutes. 30+ tools that work on small models without special syntax.

if you're migrating from openclaw i will personally help you set up hermes. drop your GPU below.
Daniel Sempere Pico@dansemperepico

My OpenClaw is so unbelievably slow now. I mainly use it for information capture, quick voice note yapping to turn into written posts, and food/workout tracking. I just gave it a very short text to edit and it took 4 minutes to reply. Anybody else experiencing this?

English
31
9
191
17K
DatSciX
DatSciX@DatSciX·
@shydev69 LOL. I only run locally and performance is extremely satisfactory. Maybe user error for you?
English
0
0
0
140
shydev
shydev@shydev69·
you've got to be fucking retarded to run LLMs locally for any serious work
English
88
13
898
119.2K
DatSciX
DatSciX@DatSciX·
@UnslothAI WOW! Exactly what I need. I wondered when you all would build this so it's easier for me to train models for my agents!!!
English
0
1
2
662
Unsloth AI
Unsloth AI@UnslothAI·
Introducing Unsloth Studio ✨

A new open-source web UI to train and run LLMs.
• Run models locally on Mac, Windows, Linux
• Train 500+ models 2x faster with 70% less VRAM
• Supports GGUF, vision, audio, embedding models
• Auto-create datasets from PDF, CSV, DOCX
• Self-healing tool calling and code execution
• Compare models side by side + export to GGUF

GitHub: github.com/unslothai/unsl…
Blog and Guide: unsloth.ai/docs/new/studio

Available now on Hugging Face, NVIDIA, Docker and Colab.
English
218
842
5.1K
1.6M
Sudo su
Sudo su@sudoingX·
cancel your chatgpt subscription and delete your openclaw slop. i'm serious.

go on ebay and buy a used RTX 3060 for the price of two months of pro. or check your drawer because half of you already own one and forgot about it.

install hermes agent from @NousResearch. one framework, 31 tools, file operations, terminal, browser, code execution. connect it to your local llama.cpp server running qwen 3.5 9B Q4. total download is 5.3 gigs. that's it. that's the whole setup.

every experiment you hesitated to run on API. every project you shelved because you didn't want your data on someone else's server. every late night idea you didn't test because you hit your rate limit. all of that is gone.

runs 24/7 on your electricity. your machine. your data never leaves your house. connect it to telegram if you want it on your phone. hook up whatever tools you need. the model thinks at 29 tok/s with 128K context and it never bills you.

qwen 3.5 9B and one RTX 3060 is the setup most people will never try because they've been trained to believe intelligence has to come from a datacenter. it doesn't. it runs on 12 gigs of VRAM under your desk right now. stop giving your thinking away for free.
English
99
191
2.1K
146.5K
Sudo su
Sudo su@sudoingX·
this is what 12 gigs of VRAM built in 2026. a 9 billion parameter model running on a 5 year old RTX 3060 wrote a full space shooter from a single prompt. blank screen on first try. i came back with a bug list and the same model on the same card fixed every issue across 11 files without touching a single line myself. enemies still looked wrong so i pushed another iteration and now the game has pixel art octopi, particle effects, screen shake, projectile physics and a combo system.

all running locally on a card that was designed to play fortnite. three iterations. zero cloud. zero API calls. every token generated on hardware sitting under my desk.

the model reads its own code, finds what's broken, patches it, validates syntax and restarts the server. i just describe what's wrong and it handles the rest.

people are paying monthly subscriptions to type into a browser tab and wait for a server farm to respond. meanwhile a GPU you can find used on ebay is running a full autonomous hermes agent framework with 31 tools, 128K context window and thinking mode, generating at 29 tokens per second nonstop.

the game still needs work. level upgrades don't trigger and boss fights need tuning. but the fact that i'm iterating on gameplay balance instead of debugging whether the code runs at all tells you where this is headed. every iteration the game gets better on the same hardware. same 12 gigs. same 9 billion parameters. same RTX 3060 from 5 years ago.

your GPU is not a gaming card anymore. it's a local AI lab that never sends your data anywhere.
Sudo su@sudoingX

i run every model through octopus invaders. same prompt, same game spec. if a model can build this autonomously on a single GPU it passes. if it can't it doesn't.

qwen 3.5 9B Q4 on a RTX 3060. first attempt was a blank screen: it built 2,699 lines across 11 files and nothing rendered. i wrote it off as a ceiling. then last night i came back with a precise bug list and the same model on the same card fixed every single one surgically. game came to life. enemies spawning, background rendering, collisions working. but bullets didn't fire and the enemies looked like colored squares instead of octopi.

today i pushed again. listed 9 more bugs. the agent read every file, patched across 4 modules, validated syntax and restarted the server on its own. bullets fire. enemies look like actual pixel art. screen shake works. the game is playable and i genuinely enjoyed it.

level upgrades still don't trigger and there's more to fix but i'm iterating on a single 12GB card running everything locally. every file, every prompt, every output stays on my machine. 29 tok/s generation, 417 tok/s prefill, 128K context window on a card that most people bought to play warzone.

if you use AI in any part of your life and you have a computer with a GPU in it you should not be sleeping on this. the model weights are free. the hermes agent framework is free. your data never leaves your house. own your cognition.

English
37
53
678
168.3K
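The read → patch → validate → restart cycle described in the thread above is, at its core, a loop that dispatches tool calls emitted by a model. A toy sketch of that dispatch idea (not the Hermes Agent implementation; the tool names and call format here are invented for illustration):

```rust
use std::collections::HashMap;

// Toy agent framework: a "model" emits tool calls, the framework dispatches them.
// Purely illustrative; not the Hermes Agent API. Tool names are made up.
type Tool = fn(&str) -> String;

// A call looks like "tool_name:argument" in this invented format.
fn dispatch(tools: &HashMap<&str, Tool>, call: &str) -> String {
    match call.split_once(':') {
        Some((name, arg)) => match tools.get(name) {
            Some(tool) => tool(arg),
            None => format!("unknown tool: {name}"),
        },
        None => "malformed call".to_string(),
    }
}

fn main() {
    let mut tools: HashMap<&str, Tool> = HashMap::new();
    tools.insert("read_file", |path| format!("contents of {path}"));
    tools.insert("run_check", |cmd| format!("ran `{cmd}`: ok"));

    // Stand-in for a model's plan: inspect the broken file, then validate.
    let plan = ["read_file:src/game.rs", "run_check:cargo check"];
    for call in plan {
        println!("{}", dispatch(&tools, call));
    }
}
```

A real agent framework closes the loop by feeding each tool result back into the model's context so the next call can react to it; the "31 tools" claim in the thread is just a larger version of this registry.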
Ivan Fioravanti ᯅ
Ivan Fioravanti ᯅ@ivanfioravanti·
Yes, Qwen3.5-9B thinks a bit too much. 22 seconds to reply to Hi. Note: here I'm using oMLX (it seems really good!)
Ivan Fioravanti ᯅ tweet media
English
17
0
60
6.6K
DatSciX
DatSciX@DatSciX·
@ollama @0p53c @openclaw How about openfang? I've been really liking it and using Gemini for now. Prefer local models.
English
0
0
1
53
ollama
ollama@ollama·
This depends on the local compute you have. Hardware will improve. If you have a high-performance computer, it runs well. If you don't have enough compute to run a good model, try Ollama's cloud service! It's free to get started.

In the terminal:
ollama launch openclaw --model kimi-k2.5:cloud

ollama.com/blog/openclaw-…
English
3
0
17
937