Southern Syndicate

8.5K posts

@wayne1767

“The righteous dream of justice. Evil dreams of leverage. One brings a sword. The other brings a hostage.”

Joined February 2021
514 Following · 102 Followers
Southern Syndicate
Southern Syndicate@wayne1767·
@slmpkns55 @leahfiles All you needed to see was the left's reaction when Matt Gaetz was nominated for AG, then compare it to how they reacted to Pam Bondi… crickets. Straight to confirmation. That wasn't coincidence.
English
0
0
1
14
Scott Riener
Scott Riener@slmpkns55·
Why is it always after the fact that we find out this information? Not saying that you didn't inform people, because I am not sure tbh. But all the dirt always seems to come out after the damage has already been done to the American people. Thank you for your hard work and information.
English
3
0
7
384
theleahfiles
theleahfiles@leahfiles·
Fun fact: Ballard Partners, her old lobbying firm, went from $19M a year to $82M+ in 2025 because Ms. Bondi signed 14 memos on day 1, including getting rid of the FARA requirement. They also represented all 3 parties in the Warner Bros "deal", including both competitors and the seller. Oh, and they conveniently represented many of the Epstein parties (Apollo Group, JP Morgan, etc.) for 3 months before the DOJ said there was nothing to see here. They were first-time clients… and for just 3 months. Convenient, right? Bye, Bondi. You have failed our country in so many ways. That 2-part follow-the-money is on my Substack!
The General@GeneralMCNews

BREAKING: President Donald Trump has informed Attorney General Pam Bondi that “her time as Attorney General is coming to an end.”

English
57
524
1.6K
49.6K
Southern Syndicate
Southern Syndicate@wayne1767·
@MattWalshBlog She never should have been nominated. I knew she was the wrong candidate because the left didn't react hysterically when she was chosen. They lost their shit over Gaetz. She faced no resistance.
English
0
0
0
460
Matt Walsh
Matt Walsh@MattWalshBlog·
Bondi should have been fired the day after the Epstein binder fiasco, which was one of the dumbest, most gratuitous and wholly unnecessary stunts I’ve ever seen in politics. I’m glad she’s gone now. We need someone who will actually do things. Not pretend to do things.
English
1K
2.6K
38.7K
882K
Southern Syndicate retweeted
elvis
elvis@omarsar0·
Building a personal knowledge base for my agents is increasingly where I spend my time these days. Like @karpathy, I also use Obsidian for my MD vaults.

What's different in my approach is that I curate research papers on a daily basis and have actually tuned a Skill for months to find high-signal, relevant papers. I was reviewing and curating papers manually for some time, but now it's all automated, as it has gotten so good at capturing what I consider the best of the best. There are so many papers these days, so this is a big deal. You all get to benefit from that with the papers I feature in my timeline and on @dair_ai.

The papers are indexed using @tobi's qmd CLI tool (all of it in markdown files along with useful metadata). So good for semantic search and surfacing insights, unlike anything out there.

I am a visual person, so I then started to experiment with how to leverage this personal knowledge base of research papers inside my new interactive artifact generator (MCP tools inside my agent orchestrator system). The result is what you see in the clip: 100s of papers with all sorts of insights visualized. I keep track of research papers daily, so believe me when I tell you that this system is absolutely insane at surfacing insights. This is the result of months of tinkering on how to index research and leverage agent automations for wikification and robust documentation.

But this is just the beginning. The visual artifact (which is interactive too) can be changed dynamically as I please. I can prompt my agent to throw any data at it. I can add different views to the data. Different interactions. I feel like this is the most personalized research system I have ever built and used, and it's not even close. The knowledge that the agents are able to surface from this basic setup is already extremely useful as I experiment with new agentic engineering concepts. I feel like this knowledge layer and the higher-level ones I am working on will allow me to maximize other automation tools like autoresearch. The research is only as good as the research questions. And the research questions are only as good as the insights the agents have access to.

Where I am spending time now is on how to make this more actionable. I am obsessed with the search problem here. The automations, autoresearch, and the ralph research loop (I built one months ago) are easier to build but are only as good as what you feed them. Work in progress. More updates soon. Back to building.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

English
124
412
4.2K
401.9K
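Neither post above includes code, but the ingest-and-compile step Karpathy describes (raw documents in, incrementally maintained .md wiki out) can be sketched minimally. Everything below is illustrative, not from either post: the raw/ and wiki/ paths, the prompt text, and the complete() placeholder standing in for whatever LLM client you use.

# Minimal sketch of the "compile raw/ into a wiki" step described above.
# complete() is a placeholder for any LLM call; all names are hypothetical.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def complete(prompt: str) -> str:
    """Stand-in for an LLM API call; wire up your own client here."""
    raise NotImplementedError("connect an LLM client")

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    for src in RAW.glob("*.md"):
        note = WIKI / src.name
        if note.exists():          # incremental: skip already-compiled docs
            continue
        summary = complete(
            "Summarize this document as a wiki article with [[backlinks]] "
            "to related concepts:\n\n" + src.read_text()
        )
        note.write_text(summary)
    # Maintain a top-level index so an agent can navigate without fancy RAG.
    titles = sorted(p.stem for p in WIKI.glob("*.md") if p.stem != "index")
    (WIKI / "index.md").write_text(
        "# Wiki index\n" + "\n".join(f"- [[{t}]]" for t in titles)
    )

if __name__ == "__main__":
    compile_wiki()

The only design idea the sketch tries to capture is the incremental compile: already-processed documents are skipped, so the wiki "adds up" over time the way both posts describe.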
Southern Syndicate retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
[Same "LLM Knowledge Bases" post, quoted in full above.]
English
2K
4.7K
41.6K
11.3M
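Karpathy's post also mentions a "small and naive search engine over the wiki" handed to the LLM as a CLI tool, without showing it. A naive version might look like the sketch below, assuming a flat directory of .md files; the term-frequency scoring and file layout are assumptions, not his actual tool.

# Hypothetical naive keyword search over a wiki of .md files, usable as a
# CLI tool an agent can call; not the actual tool from the post above.
import sys
from pathlib import Path

def search(query: str, wiki: Path = Path("wiki"), top_k: int = 5):
    terms = query.lower().split()
    scored = []
    for doc in wiki.glob("*.md"):
        text = doc.read_text().lower()
        score = sum(text.count(t) for t in terms)  # crude term-frequency score
        if score:
            scored.append((score, doc.name))
    return sorted(scored, reverse=True)[:top_k]

if __name__ == "__main__":
    for score, name in search(" ".join(sys.argv[1:])):
        print(f"{score:4d}  {name}")

An agent can then shell out to something like python search.py attention kernels and read back ranked file names, which matches the hand-it-off-via-CLI usage the post describes.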
Om Patel
Om Patel@om_patel5·
saying "hello" to Claude on the Pro plan now costs 2% of your entire session usage one message. "hello, how are you?" that's it. this is why people are mass migrating to Codex right now because its literally impossible to reach limits anthropic needs to fix this before they lose the crazy amount of developers they just gained
English
455
385
6.8K
1.1M
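For scale, taking the post's 2% figure at face value (an unsourced claim, and assuming usage scales linearly per message), the implied ceiling follows directly:

\[ \frac{100\%\ \text{session budget}}{2\%\ \text{per message}} = 50\ \text{messages per session} \]

That is, even trivial messages would cap a session at roughly 50 exchanges under those assumptions.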
All day Astronomy
All day Astronomy@forallcurious·
🚨: At the quantum level, reality is described not by solid objects, but by probability, energy fields, and interaction.
English
105
658
5K
485.3K
Southern Syndicate retweeted
Gustavo Fernández
Gustavo Fernández@Arquisteel1·
Today was my first day at the gym 💪😂
Spanish
192
2.1K
15.5K
631.1K
David Gaw
David Gaw@davidgaw·
@MattDeLuca @Savsays Actually, there are multiple issues, including that its doc requirements likely disenfranchise eligible voters (maybe mostly GOP), that it has constitutional problems as a result, and that it undermines federalism by interfering with state control of elections to no good end.
English
29
2
28
1.2K
Savanah Hernandez
Savanah Hernandez@Savsays·
It cannot be overstated how unfathomably cooked we are if we can’t even get basic voter ID laws passed in this country.
English
1.2K
2.7K
21.4K
85.3M
Southern Syndicate
Southern Syndicate@wayne1767·
@bigsmokeytrader @JackPosobiec You remember George Zinn? That dude also confessed to the same murder. Crazy, eh? He confessed, so it must be true. Two shooters. No need to question it, because he confessed.
English
0
0
5
37
Jack Posobiec
Jack Posobiec@JackPosobiec·
Even I have questioned the narrative that no one else was involved, but the evidence we have seen points to Tyler Robinson. I get why people would want to ask questions, especially after seeing so many lies through 2020, Covid, and Epstein.
English
176
41
512
77.4K
Southern Syndicate
Southern Syndicate@wayne1767·
@elonmusk Did someone explain to him that it was racist to require identification?
English
0
0
0
6
transtifa
transtifa@transtifa80594·
@paleochristcon @TheOmniLiberal talks tough when he knows damn well destiny is out of town. he might still hop on a laptop just to make you look retarded but lmao.
English
18
0
77
9.9K
Southern Syndicate
Southern Syndicate@wayne1767·
Lmao. Stfu. He didn’t start to look like a bad actor, and an interview with this greedy war whore wasn’t a priority. All that balding twat is going to do is try to convince him he should have resigned, in a way that doesn’t portray Israel as a bad actor and/or insinuate that he’s antisemitic.
English
0
0
0
51
Mark R. Levin
Mark R. Levin@marklevinshow·
These guys are pathetic. Joe Kent, my producer cannot find your number. He’s contacted you several times via the Internet. I’d like to interview you on radio about several things. You don’t get to tell me where and how to conduct an interview. We don’t have your number. So I’ll take your obfuscations as a no.
English
1.6K
335
3K
359.5K
Cernovich
Cernovich@Cernovich·
I know when people are using AI. Suddenly there are BLOCKS of paragraphs, but no posts like that before. This is an immediate unfollow and mute from me. I take it as an insult. If you won't even think for yourself or write for yourself, why should I bother with such slop?
English
101
81
1.7K
78.2K
Cernovich
Cernovich@Cernovich·
Ah. Now I see why the slop is increasing. X has a slop recipe feature. This helps the lowest IQ and least creative serve up more slop.
[tweet media]
English
18
27
305
24.7K
MAZE
MAZE@mazemoore·
Hey everyone, she posted the “proof” that Erika Kirk has been r*ping and torturing kids. It’s all right here in this unreadable spreadsheet. 😜
English
85
126
1.3K
55.9K
stevenmarkryan
stevenmarkryan@stevenmarkryan·
@elonmusk People will revisit this post in the future and say "If only we understood how BIG this would be".
English
7
5
82
4.6K
Elon Musk
Elon Musk@elonmusk·
Macrohard, or Digital Optimus, is a joint xAI-Tesla project, coming as part of Tesla’s investment agreement with xAI.

Grok is the master conductor/navigator with deep understanding of the world to direct Digital Optimus, which is processing and actioning the past 5 secs of real-time computer screen video and keyboard/mouse actions. Grok is like a much more advanced and sophisticated version of turn-by-turn navigation software. You can think of it as Digital Optimus AI being System 1 (the instinctive part of the mind) and Grok being System 2 (the thinking part of the mind).

This will run very competitively on the super low cost Tesla AI4 ($650) paired with relatively frugal use of the much more expensive xAI Nvidia hardware. And it will be the only real-time smart AI system.

This is a big deal. In principle, it is capable of emulating the function of entire companies. That is why the program is called MACROHARD, a funny reference to Microsoft. No other company can yet do this.
English
8.3K
11.4K
79.4K
47.6M
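The System 1 / System 2 split in the post above amounts to a two-tier control loop: a slow deliberate planner issues goals while a fast reactive policy acts on the last few seconds of screen video and input events. The sketch below illustrates only that loop structure; every class name, method, and interval is invented for illustration and none of it comes from xAI or Tesla.

# Hypothetical two-tier (System 1 / System 2) agent loop inferred from the
# post above; all names and timings here are made up for illustration.
import time

class Planner:                      # "System 2": slow, deliberate goal-setting
    def next_goal(self, context: str) -> str:
        return f"goal derived from: {context[:40]}"

class Policy:                       # "System 1": fast, reactive acting
    def act(self, goal: str, recent_screen: list[str]) -> str:
        return f"click/type toward '{goal}' given {len(recent_screen)} frames"

def run(steps: int = 6, window: int = 5) -> None:
    planner, policy, frames = Planner(), Policy(), []
    goal = planner.next_goal("initial screen state")
    for t in range(steps):
        frames.append(f"frame@{t}")
        frames = frames[-window:]           # keep only the recent frame window
        print(policy.act(goal, frames))     # System 1 runs every tick
        if t % 2 == 1:                      # System 2 replans less frequently
            goal = planner.next_goal(frames[-1])
        time.sleep(0.01)

if __name__ == "__main__":
    run()

The design point the sketch captures is the asymmetry: the cheap fast policy runs every tick on local hardware, while the expensive planner is consulted only occasionally, which is how the post frames the AI4-versus-Nvidia cost split.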
Ian Carroll
Ian Carroll@IanCarrollShow·
The purported Erika Kirk audio going around has been debunked. And I was definitely not wrong that it would be wild to watch. The winner once again was @RealCandaceO
English
1.3K
1.5K
12.9K
1M