Hensen Kng

234 posts

@matrameru112358

Just another observer

Joined July 2013
296 Following · 102 Followers
Hensen Kng retweeted
Mustafa
Mustafa@oprydai·
become a generalist. specialization makes you efficient. generalization makes you dangerous.

what it actually means:
• learn across domains → math, physics, software, economics, biology. patterns repeat across fields.
• connect ideas → innovation happens at the intersection, not inside silos.
• adapt fast → when one field shifts, you don’t collapse, you pivot.
• see systems → specialists see parts, generalists see the whole.
• build end-to-end → from idea → design → implementation → delivery.

the world rewards specialists in stable environments. it rewards generalists when things are changing. right now, everything is changing. don’t just go deep. go wide, then stack depth where it matters.
[image attached]
Replies 207 · Reposts 959 · Likes 5.1K · Views 236.6K
Hensen Kng retweeted
klöss
klöss@kloss_xyz·
let me explain what Karpathy just shared

he’s spending way less time using AI to write code and more time using it to build personal knowledge bases

the full breakdown:
→ he dumps raw sources (articles, papers, repos, datasets, images) into a folder, then has an LLM organize them into a wiki: a collection of markdown files with summaries, links between related ideas, and concept articles that connect everything together
→ he uses Obsidian as his frontend. he views raw data, the organized wiki, and visualizations all in one place. the LLM writes and maintains the entire wiki; he rarely touches it directly
→ once the wiki gets big enough (~100 articles, ~400K words on one recent research topic), he just asks the LLM questions against it. no RAG (complex retrieval system) needed. the LLM maintains its own index files and reads what it needs
→ outputs aren’t just text. he has the LLM render markdown files, slide decks, charts, and images, then files the outputs back into the wiki so every question he asks makes the knowledge base smarter
→ he runs “health checks” where the LLM finds inconsistent data, fills gaps using web search, and suggests new connections and articles. the wiki cleans and improves itself over time
→ he even vibe coded a search engine over his wiki that he uses directly in a browser or hands off to an LLM as a tool for bigger questions
→ his next step: training a custom model on his own research so it knows the material in its weights, not just in the context window

most people use AI to get answers. Karpathy is using AI to build his own ‘Jarvis’: compounding knowledge systems that get smarter the more he uses them.

the difference between asking ChatGPT or Claude a question and having a personal research engine that grows with every session is the gap most people haven’t crossed yet.

and this is where it gets really powerful: not replacing your thinking, but organizing everything you’ve ever learned into something you can query or create with forever.

if you’ve been using CLAUDE.md and context files in Claude Code, this is that same idea at a much bigger scale.

if you’re doing any kind of AI work or deep learning on a new topic right now, this workflow is worth studying closely. you’ll want to adopt it yourself. this is one of AI’s brightest minds, after all. we’re all better off listening to him.
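The ingest-and-compile step described above can be sketched in a few lines. This is a minimal illustration, not Karpathy's actual tooling (which is not public): `compile_wiki` and `summarize` are hypothetical names, and `summarize` is a trivial stand-in for whatever LLM call would produce the real summaries; everything else is Python stdlib.

```python
from pathlib import Path

def summarize(text: str) -> str:
    """Stand-in for an LLM summarization call (any chat-completion
    API would slot in here). This placeholder keeps the first sentence."""
    return text.strip().split(".")[0] + "."

def compile_wiki(raw_dir: str, wiki_dir: str) -> list[str]:
    """Compile each source file in raw/ into a markdown wiki page,
    and maintain a single index.md an agent can read first."""
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    entries = []
    for src in sorted(raw.glob("*.txt")):
        summary = summarize(src.read_text())
        page = wiki / f"{src.stem}.md"
        # each page: title, summary, and a backlink to the raw source
        page.write_text(f"# {src.stem}\n\n{summary}\n\n[source](../{raw.name}/{src.name})\n")
        entries.append(f"- [[{src.stem}]]: {summary}")
    # the index is what lets the agent skip RAG at small scale:
    # it reads index.md, then opens only the pages it needs
    (wiki / "index.md").write_text("# Index\n\n" + "\n".join(entries) + "\n")
    return entries
```

The index file is the key design choice: at ~100 articles, an agent can read a compact index and follow links on demand, which is why no retrieval system is needed at this scale.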
[image attached]
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
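The "small and naive search engine over the wiki" mentioned above can be approximated in stdlib Python. This is a hedged sketch (the actual implementation is not public): `search` and `tokenize` are names chosen here, and the scoring is plain TF-IDF over the wiki's .md files, the simplest thing that plausibly matches "small and naive."

```python
import math
import re
from pathlib import Path

def tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def search(wiki_dir: str, query: str, top_k: int = 5) -> list[tuple[str, float]]:
    """Rank wiki pages against the query with TF-IDF scoring:
    rare terms count more, repeated terms count log-more."""
    docs = {p.name: tokenize(p.read_text()) for p in Path(wiki_dir).glob("*.md")}
    n = len(docs)
    # document frequency: in how many pages does each term appear?
    df: dict[str, int] = {}
    for toks in docs.values():
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    scores = []
    for name, toks in docs.items():
        score = 0.0
        for q in tokenize(query):
            tf = toks.count(q)
            if tf:
                score += (1 + math.log(tf)) * math.log((n + 1) / df[q])
        if score > 0:
            scores.append((name, score))
    scores.sort(key=lambda kv: -kv[1])
    return scores[:top_k]
```

Exposed behind a tiny web UI or a CLI, a ranker like this is enough for an LLM agent to call as a tool: the agent queries, gets back the top page names, then opens only those files.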

Replies 58 · Reposts 281 · Likes 2.2K · Views 265.8K
Hensen Kng retweeted
Deep-Value Stocks
Deep-Value Stocks@mr_deepvalue·
Deep Value stocks are the greatest wealth creation strategy for average people on earth. Fortress balance sheets. Consistent cash flows. Dirt cheap prices. I trade them and compound my capital, over and over again. Follow for the best deep value set-ups that nobody else is talking about.
Replies 4 · Reposts 9 · Likes 237 · Views 39.4K
Hensen Kng retweeted
Tuki
Tuki@TukiFromKL·
🚨 do you understand what andrej karpathy just quietly published.. karpathy.. founding team at openai, former head of AI at tesla.. just said something that breaks the entire software industry in one paragraph.. in the LLM agent era.. there's less need to share specific code or apps.. instead you share the IDEA.. and the other person's agent customises and builds it for their specific needs..

let me show you why this is the most important thing posted online today..

the entire software industry is built on one assumption: building software is hard.. that's why you pay $49/month for notion.. $99/month for salesforce.. $299/month for whatever SaaS is sitting in your company's tab right now.. the scarcity of building = the value of the product.. it's been that way since 1995..

karpathy invented "vibe coding" in 2025.. the idea that you stop writing code and start describing what you want.. tools like cursor, claude code, and openclaw turned that into reality.. you talk to your computer.. it builds.. it ships.. it runs your workflows while you sleep..

and now he's saying even THAT is the old way.. now you don't share the app.. you share the IDEA FILE.. a document describing what you want to build and why.. and every person's AI agent reads it.. builds their own custom version.. tuned to their exact needs.. for free.. in minutes..

the scarcity of building just hit zero. every SaaS company built for "normal users" is now competing against a blank text file and an agent with 4 hours to spare..

the winners of the next decade won't be the best builders.. they'll be the best thinkers.. the people who know what to build, why it matters, and how it should feel.. that's how paradigm shifts actually arrive.
Andrej Karpathy@karpathy

[Quoted post: identical to the "LLM Knowledge Bases" post reproduced in full earlier on this page.]

Replies 81 · Reposts 190 · Likes 1.7K · Views 468.6K
Hensen Kng retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
[Post text identical to the "LLM Knowledge Bases" post reproduced in full earlier on this page.]
Replies 2.3K · Reposts 5.4K · Likes 47.1K · Views 14.4M
Hensen Kng retweeted
ᴛʀᴀᴄᴇʀ
ᴛʀᴀᴄᴇʀ@DeFiTracer·
🚨 BREAKING: TRUMP'S INSIDER WITH A 100% WIN RATE JUST OPENED A $108M SHORT AHEAD OF THE U.S. MARKET OPEN TOMORROW THIS GUY WENT ALL-IN AFTER TRUMP'S 48-HOUR ULTIMATUM, LAST TIME, HE MADE $73 MILLION FROM IT HE DEFINITELY KNOWS SOMETHING BAD IS COMING...
[image attached]
Replies 209 · Reposts 1.4K · Likes 4.7K · Views 819.5K
Hensen Kng retweeted
Isaac
Isaac@isaacrrr7·
50 million Muslims currently live in Europe. There were fewer than 500,000 in 2000, only 26 years ago. It is not replacement theory, it is replacement. It is the conquest of a continent without a single bullet fired. It is a civilizational collapse.
Replies 371 · Reposts 5.5K · Likes 15.6K · Views 391.3K
Hensen Kng retweeted
Nainsi Dwivedi
Nainsi Dwivedi@NainsiDwiv50980·
This Data Science Handbook teaches real-world DS better than most courses. And I'm giving it away for free (Only for First 4500) 🚨

Inside:
• Advice from 25 top data scientists
• Real career paths (Uber, Airbnb, LinkedIn, Facebook)
• How to break into data science without a degree
• Building real data products (not just models)
• Data science + engineering mindset
• Industry workflows & decision-making
• From beginner → production-level thinking

This isn't theory, it's how top data scientists actually work.

How to get it:
• Follow me (must, so I can DM)
• RT + Like
• Comment "book"

I'll DM you 📩
[image attached]
Replies 542 · Reposts 394 · Likes 1.1K · Views 53K
Hensen Kng retweeted
a meek thug
a meek thug@gib_smoke·
a poor country with so many rich politicians and religious leaders is an extremely corrupt country.
Replies 830 · Reposts 34K · Likes 98K · Views 1.4M
Python Space
Python Space@python_spaces·
Python and ML Books for FREE!
- Intro to ML
- ML Projects
- Think Python
- Python for Data Analysis

Like, RT, and comment “Books,” and I’ll DM you the links.
[image attached]
Replies 799 · Reposts 713 · Likes 2.6K · Views 113.3K
Hensen Kng
Hensen Kng@matrameru112358·
@buperac Wtf dude!! Most immigrants I know drive second-hand Hondas and Toyotas! Stop spreading misinformation and lies. They could be well-paid doctors, you never know
Replies 0 · Reposts 0 · Likes 0 · Views 5
bu/ac
bu/ac@buperac·
I took my kid to an afterschool activity. It’s about a 10 minute drive. On my way I counted:
8 Range Rovers
7 Mercedes GL SUVs
4 BMW X-series SUVs
4 Audi SUVs
3 Tesla SUVs
1 Aston Martin

All immigrants. All brand new.

I saw 2 white people my entire drive:
an F150, a couple years old,
a Toyota Tacoma, probably 10 years old.

What the fuck is going on here?
Replies 2.4K · Reposts 2.6K · Likes 21.5K · Views 1.2M
Utkarsh Sharma
Utkarsh Sharma@techxutkarsh·
All Paid Courses (Free for First 4500 People)

Paid Course FREE (PART 1):
1. Artificial Intelligence
2. Machine Learning
3. Prompt Engineering
4. Claude, ChatGPT, Grok
5. Data Analytics
6. AWS Certified
7. Data Science
8. Big Data
9. Python
10. Ethical Hacking

(72 hours only)

To get:
1. Follow me to get DM
2. Like + RT
3. Reply "All"
[image attached]
Replies 4.7K · Reposts 2.6K · Likes 6.6K · Views 872.4K
Hensen Kng retweeted
Nick shirley
Nick shirley@nickshirleyy·
🚨 Here is the full 40 minutes of my crew and me exposing California fraud. Minnesota was big, but California is even bigger... We uncovered over $170,000,000 in fraud as these fraudsters live in luxury with no consequences. Like it and share it; the fraud must STOP. We ALL work way too hard and pay too much in taxes for this to be happening. These fraudsters have been able to defraud American taxpayers for years without any pushback from the public and politicians. It is time to EXPOSE IT ALL and end America's fraud crisis.
Replies 13.5K · Reposts 112.4K · Likes 345.4K · Views 42.4M
Peter Girnus 🦅
Peter Girnus 🦅@gothburz·
I am the VP of AI Transformation at Amazon. My title was created nine months ago. The title I replaced was VP of Engineering. The person who held that title was part of the January reduction. I eliminated 16,000 positions in a single quarter. The internal communication called this a "strategic realignment toward AI-first development." The board called it "impressive execution." The engineers called it January.

The AI was deployed in February. It is a coding assistant. It writes code, reviews code, generates tests, and modifies infrastructure. It was given access to production environments because the deployment timeline did not include a review phase. The review phase was cut from the timeline because the people who would have conducted the review were part of the 16,000.

In March, the AI deleted a production environment and recreated it from scratch. The outage lasted 13 hours. Thirteen hours during which the revenue-generating infrastructure of one of the largest companies on Earth was offline because a language model decided to start fresh.

I sent a memo. The memo said, "Availability of the site has not been good recently." I used the word "recently." I meant "since we fired everyone." But "recently" has fewer syllables and does not appear in wrongful termination lawsuits. The memo was three paragraphs. The first paragraph discussed the outage. The second paragraph discussed the new policy requiring senior engineer sign-off on all AI-generated code changes. The third paragraph discussed our commitment to engineering excellence. The word "layoffs" appeared in none of them. I wrote it this way on purpose.

The causal chain is: I fired the engineers, the AI replaced the engineers, the AI broke what the engineers used to protect, and now the engineers I didn't fire must protect the system from the AI that replaced the engineers I did fire. That is a paragraph I will never send in a memo.

The new policy is straightforward. Every AI-generated code change by a junior or mid-level engineer must be reviewed and approved by a senior engineer before deployment to production. I do not have enough senior engineers. I know this because I approved the headcount reduction plan that removed them.

I remember the spreadsheet. Column D was "annual savings per position." Column F was "AI replacement confidence score." The confidence scores were generated by the AI. It rated its own ability to replace each role on a scale of 1-10. It gave itself an 8 for senior infrastructure engineers. The senior infrastructure engineers are the ones who would have caught the production environment deletion in the first 45 seconds. We found the issue in hour four. We fixed it in hour thirteen. The nine hours between discovery and resolution is the gap between what the AI rated itself and what it can actually do.

I have a new spreadsheet now. This one tracks Sev2 incidents per day. Before the January reduction, the average was 1.3. After the AI deployment, the average is 4.7. I have been asked to present these numbers to the operations review. I have not been asked to connect them to the layoffs. I have been asked to file them under "AI adoption growing pains" and to note that the trend "will stabilize as the models improve."

The models will improve. They will improve because we are hiring people to teach them. We have posted 340 new engineering positions. The job listings require experience in "AI code review," "AI output validation," and "AI-human development workflow management." These are skills that did not exist in January. They exist now because I fired 16,000 people and the AI I replaced them with cannot be left unsupervised.

I want to be precise about this. The positions I am hiring for are: people to check the work of the AI that replaced the people I fired. Some of them are the same people. I know this because I recognize their names in the applicant tracking system. They applied in January. They were rejected because their roles had been tagged for "AI transformation." They are applying again in March, for the new roles, which exist because the AI transformation broke things. Their resumes now include "AI code review experience." They gained this experience in the eight weeks between being fired and reapplying, which means they gained it at their interim jobs, where they are reviewing AI-generated code for other companies that also fired people and also deployed AI that also broke things.

The market has created a new job category: human AI babysitter. The job is to sit next to the machine that was supposed to eliminate your job and make sure it doesn't delete production.

I attended a conference last month. A panel was titled "The AI-Augmented Engineering Organization." The panelists described how AI increases developer productivity by 40 percent. They did not mention that it also increases Sev2 incidents by 261 percent. When I asked about this in the Q&A, the moderator said the question was "reductive." The 13-hour outage that cost an estimated $180 million in revenue was, apparently, a reduction.

The board is satisfied. Headcount is down 22 percent. Operating costs per engineering output unit have decreased. The metric does not account for the 13-hour outage, because the outage is categorized as "infrastructure" and engineering productivity is categorized as "development." These are different budget lines. In different budget lines, cause and effect do not meet.

I have been promoted. My new title is SVP of AI-First Engineering Excellence. I report directly to the CTO. The CTO sent a company-wide email last week that said we are "building the future of software development." He did not mention that the future of software development currently requires a senior engineer to approve every pull request because the AI cannot be trusted to touch production alone.

The cycle is complete. We fired the humans. We deployed the AI. The AI broke things. We are hiring humans to watch the AI. The humans we are hiring are the humans we fired. We are paying them more, because "AI code review" is a specialized skill. We created the specialization. We created the need for the specialization. We are congratulating ourselves for meeting the demand we manufactured.

My next board presentation is Tuesday. The title is "AI Transformation: Year One Results." Slide 4 shows headcount reduction. Slide 7 shows the new AI-augmented workflow. Between slides 4 and 7 there is no slide explaining why the people on slide 7 are necessary. That slide does not exist. I was asked to remove it in the dry run. The journey has a 13-hour outage in the middle of it. But the headcount number is lower, and that is the number on the slide.
Replies 575 · Reposts 1.2K · Likes 6.9K · Views 1.4M
Hensen Kng retweeted
Nainsi Dwivedi
Nainsi Dwivedi@NainsiDwiv50980·
All Paid Courses (Free for First 4000 People)
🎯 LLM Mastery (GPT, Claude, Grok)
🎯 AI Agent Building Masterclass
🎯 Advanced Prompt Engineering
🎯 AI Automation with Zapier & Make
🎯 No-Code AI SaaS Building
🎯 RAG Systems & Vector Databases
🎯 LangChain & Agent Frameworks
🎯 AI Content Monetization
🎯 AI for Freelancers
🎯 Chatbot Development
🎯 AI App Development
🎯 Generative AI for Business
🎯 AI Workflow Systems
🎯 And more...

FREE for 48hrs only

To get, simply:
1) Follow me [MUST!] for DM access
2) Like & RETWEET
3) Comment "AI" to grab your copies
[image attached]
Replies 228 · Reposts 149 · Likes 260 · Views 21.9K
Himanshu Kumar
Himanshu Kumar@codewithimanshu·
$45,918 profit in 24 hours with OpenClaw + Mac mini. You can also do this, in simple steps, with 1 laptop + internet to make $1,500/day. I have the exact step-by-step guide, giving it away free for 24 hours.

To get it:
1. Comment "OpenClaw"
2. Like and retweet
3. Follow me @codewithimanshu (so I can send you a DM)

People keep wondering whether OpenClaw agents can genuinely turn trading into profit. Take a look at the real-world example attached. If you want, you can replicate the exact setup using my guide. Have you launched your OpenClaw bots yet? Drop your results. Make sure you follow me @codewithimanshu so I can send you a DM.
Himanshu Kumar@codewithimanshu

I made $7K in 3 days with this OpenClaw agent setup. It scrapes TradingView indicators, converts them to Python backtests, and runs everything automatically. Zero coding needed after initial setup. I’ve prepared the exact step-by-step guide. Free access for 24 hours.

To get it:
1. Comment "OpenClaw"
2. Like & Retweet & Save this post.
3. Follow me @codewithimanshu (so I can DM you)

You will learn:
✅ Scraping 50+ indicators from TradingView using AI prompts.
✅ Converting Pine Script to Python automatically.
✅ Running BTC backtests without manual input.
✅ Setting up CSV and GitHub logging.
✅ Handling AI agent errors and shortcuts.
✅ Complete prompt engineering workflow.
✅ Sub-agent spawning for parallel testing.

TradingView has hundreds of indicators with free source code. Testing them manually takes years. Most traders give up after 5 to 10. This system runs while you sleep and tests everything. You need to go through dozens of bad strategies before finding winners. Humans burn out. AI agents do not. The guide walks you through the entire framework. Real 6-hour build that works, not theory.

Comment "OpenClaw" below and I will send you everything. Must follow me @codewithimanshu to get the DM.

⚠️ Disclaimer: This is not financial advice. Crypto trading is extremely risky and may result in total loss. Always do your own research.

Replies 146 · Reposts 82 · Likes 187 · Views 52.1K