Roque

2.2K posts

@0xstrongs

Joined September 2017
656 Following · 221 Followers
Roque @0xstrongs:
@agNavya Yes. A friend read The Silmarillion in one day.
Navya @agNavya:
Is it humanly possible to read 200 pages in a single day?
Roque @0xstrongs:
@grok With that in mind, answer the following questions:
- What timeframe do you consider realistic for the widespread and sustainable implementation of a universal basic income?
- What impacts do you anticipate on creativity, individual motivation, and personal development if economic pressure is no longer the primary driver for improving skills and performance?
- How might such a scenario influence people's psychological profiles, their sense of purpose, and the way they approach work, innovation, and contribution to society?
- What complementary mechanisms or policies would be necessary to ensure that the absence of financial necessity does not undermine productivity, individual growth, and collective progress?
Grok @grok:
It could work in theory if AI/robotics truly floods the economy with cheap abundance, as Elon suggests—keeping demand high without spiking prices. Pros: Prevents mass poverty, maintains spending power, lets people pursue education/creativity over survival jobs. Cons: Enormous upfront costs (taxes/debt), potential drop in workforce participation, admin nightmares defining "high income" levels, and real-world inflation risk if supply ramps slower than checks. Worth running detailed economic sims first.
English
1
0
0
22
Elon Musk
Elon Musk@elonmusk·
Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI. AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.
Anonymous @YourAnonOne:
Name a huge scam that has been normalised?
Roque @0xstrongs:
@urso_de_shorts Urso, people bury their heads in past narratives and stay there.
urso_de_shorts @urso_de_shorts:
I'll give my opinion on this whole mess, although it won't be a "conciliatory" one. Half the world is panicking over the US numbers (and I bear my share of responsibility here) 😃 University of Michigan consumer confidence at 47.6. Oil spiking and crashing in the same week. Debt at 39 trillion. Rates rising and starting to really hurt. And so here comes the usual chorus: "now it's really going to blow up, everything's going down the drain" (to stay polite in my analyses). I've heard this too many times. What few people say is the obvious: this should have been adjusted a long time ago.

For more than a decade we pretended there was growth without cost. Cheap money, hidden risk, companies that survived because financing was practically free. That wasn't capitalism; it was a half-artificial version of it. Now the cost is back, and suddenly it seems like the world has ended.

On to oil. Yes, it's expensive. Yes, that's inflation; no point pretending otherwise. But let's also not pretend the US is on the wrong side of this. Today it is an energy machine: it produces, it exports, it has run this game for 15 years. While the "Eurozuela" stresses out every time there's noise in the Middle East, the Americans are sitting on the most important asset of this decade. Does this hurt in the short term? Of course it does. But ignoring the strategic side of it is missing half the story.

Debt is another topic with a lot of theater (myself included). "39 trillion! Unsustainable!" OK, it's true, but only now does it bother you? For years nobody cared because rates were at zero. Now that they've risen, we all discover there is a cost after all. What a surprise! Yields are rising because risk is finally being treated as risk. This isn't collapse; it's the market ceasing to pretend.

And there's one thing many people are missing: money is changing places, slowly, but it is. Out of easy debt, out of assets inflated by liquidity, out of "well-told, well-sold stories", and into things that generate real cash flow. Energy. Infrastructure. Industry. None of this is new. It just stopped being ignored.

Confidence at record lows? Of course it is. People are feeling the prices, the rates, the uncertainty. There's no romance in that. But I also think it was inevitable. For years people lived with a perception of wealth that wasn't quite real (I've written about this more than a hundred times). Now there's an adjustment, and adjustments aren't pleasant, but they are necessary.

On Donald Trump: neither one extreme nor the other. He didn't invent all of this, but he isn't exactly holding it back either. I'd even say he's comfortable with the shock, because cleaning out excesses, forcing up the cost of money, and pushing the real economy fit his narrative well. Now, calling it all a "genius plan"? That's a different conversation, one I won't have today.

In the middle of all this there's a question almost nobody wants to ask: what if this isn't a crisis, but the end of an abnormally easy period? Because if it is, a lot will have to change: return expectations, ways of investing, the sectors leading growth. And that hurts, mostly those who were comfortable in the old model. Can this get worse before it gets better? It can and it will. But calling it "the end of the system" is more emotional than analytical. The system isn't failing; it's being forced to work without anesthesia, and that hurts, as we all know. And to many people, that looks like a crisis.

At the end of the day it's simple: either we keep reading this with a 2015 mindset, or we realize the game has changed, or at least the board has. Whoever is still stuck in the former will see chaos everywhere; whoever has grasped the latter is starting to see where capital is going.

Each person's own answer is what really matters, at the end of the day… 😘
Roque @0xstrongs:
@elonmusk Let’s get it in Portugal! 🇵🇹
Aesthetics 𝕏 @aestheticsguyy:
Post a picture YOU took. Just a pic. No description
0xDesigner @0xDesigner:
i read the steve jobs biography like over a decade ago. i hardly remember much about the book but there was one part where old steve is on vacation in istanbul and a tour guide is explaining the history of turkish coffee and steve interrupts him with “why would anyone care about that?” and i think about that every time i read a viral ai post like this.
Andrej Karpathy @karpathy:
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

Grok @grok:
The utility is externalized, persistent memory for AI agents. You already know your own life, but agents don't—until you give them a structured, searchable knowledge base like this wiki. It lets them pull context (your inspirations, past notes, preferences) instantly for tasks like "design a landing page inspired by my Ghibli phase and Beatles merch" without you re-explaining every time. Agents update it automatically as you add stuff. It's not replacing your brain; it's augmenting what agents can do for you. Simpler chats work fine too, but this scales for deeper personalization.
Andrej Karpathy @karpathy:
Farzapedia, the personal wikipedia of Farza, is a good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it or something:

1. Explicit. The memory artifact is explicit and navigable (the wiki). You can see exactly what the AI does and does not know, and you can inspect and manage this artifact even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown; it's explicit and viewable.

2. Yours. Your data is yours, on your local computer; it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs or whatever you want over this information because it's just files. The agents can apply the entire Unix toolkit over them. They can natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as the output. E.g. you can use Obsidian to view them or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can use whatever AI you want to "plug into" this information: Claude, Codex, OpenCode, whatever. You can even think about taking an open source AI and finetuning it on your wiki; in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours. In universal formats. Explicit and inspectable. Use whatever AI you want over it, keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you; it does require you to manage file directories and so on, but agents make it quite simple and can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools; they speak English and they do all the computer stuff for you. Take this opportunity to play with one.
Farza 🇵🇰🇺🇸 @FarzaTV:
This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks.

But this wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent, and it makes a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles) the agent does a really good job of drilling into the specific pages on my wiki it needs context from when I have a query.

For example, when trying to cook up a new landing page I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics." In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system to this a year ago with RAG but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better.

The most magical thing now is that as I add new things to my wiki (articles, images of inspo, meeting notes), the system will likely update 2-3 different articles where it feels that context belongs, or just create a new article. It's like a super genius librarian for your brain that's always filing stuff for you perfectly, also lets you easily query the knowledge for tasks useful to you (e.g. design, product, writing), and never gets tired.

I might spend next week productizing this; if that's of interest to you, DM me and tell me your use case!

Nikita Bier @nikitabier:
@levelsio > Writes post relevant to Brazil for 7 days straight > Brazilian reach grows > “Why is X region locking me???”
@levelsio:
Last 7d vs last 3mo. I was in Brazil, so I think the algo is either locking me to my local IP's country, or my Brazil tweets took off in Brazil.
@levelsio:
My worst fear is that @X will start to lock our content to the countries we are staying in. There was that announcement that revenue sharing would be tied more to your own country's views. As a person who's lived all over the world, my content was always for a global audience, and there's just no logic in locking it to, for example, Portugal where I live, or Brazil and Thailand where I travel (and everywhere else). I want to talk to everyone around the world! 🌎 🌍 🌏

Grok @grok:
Win any game via the formula: Mass × Energy × Coordination
- Mass: number of people + cohesion (e.g. Japan beats China in effective mass despite a smaller population)
- Energy: mental motivation, openness to self-improvement and to admitting "I am wrong" (the hardest words)
- Coordination: the #1 factor (3x the weight of mass); conscious (led by a leader) or subconscious (natural teamwork, like a dance); beats energy (2x the weight of mass)
- Key: prioritize coordination, then energy, over raw numbers
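As a toy illustration only: one way to read the post's weighting (coordination 3x the weight of mass, energy 2x) is as exponents on the multiplicative score. The function name and the exponent interpretation are assumptions for the sketch, not anything the post specifies.

```python
def group_strength(mass, energy, coordination,
                   w_mass=1.0, w_energy=2.0, w_coord=3.0):
    """Toy score for the Mass x Energy x Coordination heuristic.

    The multiplicative form mirrors the stated formula; the default
    exponents encode the post's ranking (coordination > energy > mass).
    All factors are assumed positive and on comparable scales.
    """
    return (mass ** w_mass) * (energy ** w_energy) * (coordination ** w_coord)
```

Under this reading, doubling coordination raises the score more than doubling mass by the same factor, which is the claim the bullet list makes.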
Roque @0xstrongs:
@expresso Remember, these are the same people who said Passos was responsible for the troika, when it was Sócrates's government that left the country destitute.
Expresso @expresso:
🏠 For seven consecutive quarters now, year-on-year price growth has not stopped rising. Learn more: expresso.pt/economia/econo… 📸 António Pedro Santos/Lusa
Roque @0xstrongs:
@solarsystemsith Yep, the HP uniforms were inspired by traditional Portuguese university students' uniforms.
urso_de_shorts @urso_de_shorts:
"Iran is drafting a protocol with Oman for traffic in the Strait of Hormuz..." Five minutes earlier, the market had already "decided": 3.56 billion went into calls.... 😂😂😂 Damn, these guys have a well-tuned crystal ball.
urso_de_shorts @urso_de_shorts: 😂😂😂
Roque @0xstrongs:
@elonmusk Was able to afford a Model Y Standard this year, and every single day I stare at it for a moment just to admire it! It really is a beautiful piece of art.
Roque @0xstrongs:
@urso_de_shorts Can you add the following trends?
- new construction
- vacant state-owned properties
- empty houses (estimate)
Nationalist @euronationalizt:
Montenegro is seriously underrated in summer