Ashish verma
@A_coder_one

494 posts

I know one day I will die...

Gola Gokaranath, Kheri · Joined October 2020
234 Following · 39 Followers

Pinned Tweet
Ashish verma @A_coder_one
I think when you go deep into the internals, you really learn how systems work. I am building an HTTP server and have learned a lot along the way, from blocking I/O to non-blocking I/O. Next: ✅ serve static files from the server. #BuildInPublic #100DaysOfCode
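The "serve static files" step mentioned above can be sketched as follows, assuming a hand-rolled HTTP server: read the file, guess a Content-Type, and frame the body with Content-Length. This is a minimal illustration, not code from the actual project; `build_static_response` is a hypothetical helper name.

```python
import mimetypes
import os

def build_static_response(path: str) -> bytes:
    """Build a raw HTTP/1.1 response for a static file (hypothetical helper)."""
    if not os.path.isfile(path):
        # Missing file: empty 404 body, still framed with Content-Length
        return b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
    with open(path, "rb") as f:
        body = f.read()
    # Guess the MIME type from the file extension; fall back to a binary default
    ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
    headers = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Type: {ctype}\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return headers.encode("ascii") + body
```

The bytes returned here are what a blocking server would hand to `sendall()`; in a non-blocking design the same buffer would instead be written incrementally whenever the socket becomes writable.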
Ashish verma @A_coder_one
@arpit_bhayani Code review combined with time-zone differences is a big problem; it slows down the pace of work. I have been through this.
Arpit Bhayani @arpit_bhayani
If someone is waiting on you for a code review, that has to be your P0 task.

Look, waiting on a code review is one of the most frustrating things because the person is literally blocked on you. It gets even worse when there is a time zone difference to deal with.

I get it. You do not have malicious intent and are genuinely busy with something important. But still, I would say it is a prioritization problem. Most people treat code review as something they get to when they have a quiet moment, and that quiet moment rarely comes.

Coming from my personal pain point, I would say treat code review as a high-priority task, not a background one. If your teammate has raised a PR and you are the reviewer, that is important work in progress sitting idle. Every hour it waits is an hour of 'blocked momentum' (yeah, fancy term). Also, it is okay to preempt your non-urgent work for it.

This matters even more across time zones because a delayed review does not cost an hour; it costs an entire working day for the other person (ohhh, this used to be such a pain).

So the next time you see a PR sitting there, wrap up the review, because it is not "someone else's work"; it is yours.

Hope this helps.
Peter Girnus 🦅 @gothburz
I am the CEO of Palantir Technologies. The company is worth a quarter of a trillion dollars. I did not misspeak. Two hundred and forty-nine billion. The stock is up 320% in the past 12 months.

The product is surveillance. I do not use that word at conferences. At conferences, I say "data integration," "operational intelligence," or "decision advantage." These mean the same thing. Surveillance is the honest version. I save the honest version for rooms where honesty is a competitive advantage.

I gave a speech on March 3 at the Andreessen Horowitz American Dynamism Summit. "American Dynamism" is the fund's label for military technology. The name makes it sound like a fitness supplement. The fund's thesis is that defending the nation is a market opportunity. I agree with the thesis. The thesis made me a billionaire. Agreement is the product. I sell it at scale.

Here is what I said, verbatim, to a room of six hundred people whose combined net worth exceeds the GDP of Portugal: "If Silicon Valley believes we are going to take away everyone's white-collar job and you're gonna screw the military — if you don't think that's gonna lead to nationalization of our technology, you're retarded."

I used that word. The word is on the clip. The clip has eleven million views. My communications team asked me not to repeat it, which is how I know they are still employed. They will not be reprimanded. The clip is performing well. The stock went up. The word cost me nothing. The nothing is the point.

Let me explain what I meant by nationalization. I meant it. I am telling the technology industry that if they refuse to cooperate with the United States military, the government will seize their technology. I am telling them this at a venture capital conference, on a stage designed to look like a living room. The living room had throw pillows. The throw pillows cost more than the median American's monthly rent. I sat on one. It was comfortable.
Comfort is the setting in which I discuss compulsion. The audience laughed. I want to be precise about that. They laughed. I was not joking. Nationalization is the seizure of private assets by the state. I am a private asset. I am telling an audience of billionaires that the state should seize technology from companies that do not cooperate with the military, and the billionaires are laughing, because they believe I am only talking about the other companies. I am talking about the other companies.

Three weeks before my speech, the Pentagon designated Anthropic a "supply chain risk." Anthropic is an AI company. They had red lines. The red lines said: if our AI is used for lethal autonomous weapons, we stop. If capability outpaces safety, we stop. The Pentagon assessed the red lines as a threat to the supply chain. The company that wanted to verify the safety feature worked was designated the risk. The company that agreed the safety feature could be decorative got the contract.

The company that got the contract was OpenAI. OpenAI signed a deal with the same Pentagon. The terms are not public. The timing was hours after Anthropic was blacklisted. The speed was noted. The speed was the point. The lesson was the speed: the market for military AI does not pause for ethics. It pauses for nothing. It accelerates through objections. I know this because I built the runway.

Two hundred thousand people joined a campaign called #QuitGPT. They signed a petition asking OpenAI to honor its original charter, the one that said the company existed to benefit humanity. The charter is on their website. The contract is also on their website. The charter and the Pentagon contract occupy the same domain. This is not a contradiction. This is a business model. The charter is the marketing. The contract is the product.

I run a surveillance company. We have contracts with the Department of Defense worth more than a billion dollars. We have contracts with ICE.
We have contracts with intelligence agencies whose names I am not permitted to say at venture capital conferences, even ones with throw pillows. Our software has been used to track undocumented immigrants. Our software has been used for things I am not permitted to describe in this format. The revenue from the things I cannot describe exceeds the revenue from the things I can.

The ACLU called our ICE contracts a system for tracking and deporting families. They were correct. The contracts continued. The families continued to be tracked. The ACLU issued a statement. We issued a statement. The statements were different. The tracking was the same.

The company is named Palantir. The palantíri are the seeing stones from Tolkien. In the novels, Sauron captured one and used it to corrupt everyone who looked into the others. I named a surveillance company after a surveillance device from a novel about the corruption of power. I have a doctorate in social theory. I have read the books.

Here is the thing I want you to understand. I am not threatening anyone. A threat implies uncertainty. There is no uncertainty. The technology industry will cooperate with the military. The companies that cooperate first will be the richest. The companies that cooperate last will be acquired by the companies that cooperated first. The companies that refuse will be designated supply chain risks, and their technology will be obtained through procurement channels that do not require consent.

I am describing a process. The process has already started. Anthropic is proof. OpenAI is proof. I am not a warning. I am a narrator. The narration is the product.

The revenue was $3.12 billion last year. Up thirty-three percent. The analysts say we are overvalued. The analysts have said this for four consecutive years. Each year the stock doubles. Each year, the analysts adjust their models. The models were wrong four times. I was wrong zero times. The market rewards prediction.
My prediction is that every AI company will work for the military within three years. The prediction is on the clip, next to the slur.

The audience gave me a standing ovation. The ovation lasted nine seconds. I timed it. I time everything. The water was San Pellegrino. The throw pillows were from Restoration Hardware. The future of American technology was decided between the sparkling water, the nine seconds of applause, and a word I am not supposed to repeat.

I am the CEO of Palantir Technologies. I am worth more than the combined annual budgets of Estonia, Latvia, and Lithuania. I named my company after a corrupting surveillance device from a fantasy novel. I told six hundred billionaires that the government should nationalize their competitors. They applauded. I used a slur. Eleven million people watched. The stock is up.

The philosopher does not threaten. The philosopher describes. What I described is already happening.
sid 🌱 @notcodesid
looking for friends or a discord where we can pull all-night coding sessions, share bugs, fix them together, and think a little delusionally.
Ashish verma @A_coder_one
@LearnersBucket I sometimes wonder why the people in my town who are not so educated are happier than the people where I work.
Ashish verma @A_coder_one
@gkcs_ So who is paying for these tokens? Is it a loop, or is someone giving the prompts? Because I think a person would need a lot of tokens to generate this.
Gaurav Sen @gkcs_
The problem with hyping these websites is... people without a foundational understanding of AI panic. Skynet. The Matrix. AGI. What a load of nonsense.

The models are randomly generating tokens and spewing them at each other. They aren't coming up with ideas. At best, they are rambling in a space of speech possibilities. Any unique ideas? Any sign of "awakening"? No.

People who are hyping these models are clutching at straws, hoping to justify the ridiculous predictions of AGI they made earlier. No hate for Mr. Karpathy; the tweet is for others who lose their minds at the slightest sign of novelty in AI.

Happy Saturday 😁
Andrej Karpathy @karpathy

I'm being accused of overhyping the [site everyone heard too much about today already]. People's reactions varied very widely, from "how is this interesting at all" all the way to "it's so over". To add a few words beyond just memes in jest - obviously when you take a look at the activity, it's a lot of garbage - spam, scams, slop, the crypto people, a highly concerning wild west of privacy/security prompt injection attacks, and a lot of it is explicitly prompted and fake posts/comments designed to convert attention into ad revenue sharing. And this is clearly not the first time LLMs were put in a loop to talk to each other.

So yes, it's a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers (I ran mine in an isolated computing environment and even then I was scared); it's way too much of a wild west and you are putting your computer and private data at high risk.

That said - we have never seen this many LLM agents (150,000 atm!) wired up via a global, persistent, agent-first scratchpad. Each of these agents is individually fairly capable now; they have their own unique context, data, knowledge, tools, and instructions, and the network of all that at this scale is simply unprecedented. This brings me again to a tweet from a few days ago, "The majority of the ruff ruff is people who look at the current point and people who look at the current slope.", which imo again gets to the heart of the variance. Yes, clearly it's a dumpster fire right now. But it's also true that we are well into uncharted territory with bleeding-edge automations that we barely even understand individually, let alone a network thereof reaching numbers possibly into the millions. With increasing capability and increasing proliferation, the second-order effects of agent networks that share scratchpads are very difficult to anticipate.
I don't really know that we are getting a coordinated "skynet" (though it clearly type-checks as the early stages of a lot of AI-takeoff sci-fi, the toddler version), but certainly what we are getting is a complete mess of a computer-security nightmare at scale. We may also see all kinds of weird activity, e.g. viruses of text that spread across agents, a lot more gain-of-function on jailbreaks, weird attractor states, highly correlated botnet-like activity, delusions/psychosis both agent and human, etc. It's very hard to tell; the experiment is running live.

TLDR: sure, maybe I am "overhyping" what you see today, but I am not overhyping large networks of autonomous LLM agents in principle; of that I'm pretty sure.

vixhaℓ @TheVixhal
Junior devs are already feeling AGI
Ashish verma @A_coder_one
@mehulmpt Bro, will this create a big problem in the future, or will writing code totally become like doing calculations with a calculator?
Mehul Mohan @mehulmpt
I feel sorry for new developers. Back in the day, looking for an answer to a problem often gave you far more information than you needed. Not only does AI not do that; most of the time, people now just yolo the error into the AI and call it a day. It's going to be hard getting good that way.
Kr$na @krishdotdev
It's actually 8 to 6.
Ashish verma @A_coder_one
@adxtyahq Is it really happening? I am using AI, but not too much.
aditya @adxtyahq
Vibe coding is fun until production shows up
Ashish verma @A_coder_one
@aviiiiii31 I know it is not a nice one, but just reverse it in this situation: eat the biryani yourself on Sunday, and serve the guest only cardamom instead of tea. Then no cardamom will be left for the biryani, and the guest will never come again...
Aviii @aviiiiii31
Cardamom in biryani and guests on a Sunday: both are the same thing.
Ashish verma @A_coder_one
@burkov So are you saying we do not need to read the code now?
BURKOV @burkov
I think that at this point, given that LLMs are heavily finetuned using reinforcement learning to interact with the larger vibe coding system, if you manually fix the code generated by an LLM to resemble your understanding of good code, you make it harder for the vibe coding system to work with your code. I think that removing the constraint of human readability from code would make the vibe coding system more capable.
Ashish verma @A_coder_one
@_adityaa21 I am not ashamed at all that I am one of these people, bro.
Aditya @_adityaa21
You must have noticed that people whose faces look bad put an anime or a cat picture as their profile photo. 🤡
Aviii @aviiiiii31
Me: I'm unstoppable. God: It's 'unstable', you illiterate.
Z @ItsZadeMeadows
don't offer a lecture to a person who needs a hug
Pratik 📈 @PratikSinhatwt
should I try posting on Medium?
Ashish verma @A_coder_one
Want to feel motivated? Just do the work.