Alex
@sashock

7K posts
Joined May 2008
227 Following · 110 Followers
Alex reposted
James Ivings
James Ivings@JamesIvings·
Someone went through 6 months of @ProductHunt launches and 97% of them failed! This shit is hard, don't beat yourself up 🥹
[image]
53 replies · 33 reposts · 572 likes · 149.4K views
Alex reposted
Chubby♨️
Chubby♨️@kimmonismus·
Brace yourself for the future where the robots go rogue
251 replies · 481 reposts · 3.9K likes · 498.2K views
Alex
Alex@sashock·
@LerSfy @PierreDeWulf I'm surprised, I'd seen an article saying that this represented only 0.5% of the online search market share. Basically, they have plenty of time to see it coming.
0 replies · 0 reposts · 1 like · 12 views
Pierre de Wulf
Pierre de Wulf@PierreDeWulf·
We’re closer to the end of Google’s search monopoly than we think. ScrapingBee now gets more organic traffic from ChatGPT than from DuckDuckGo. Next month? It’s set to overtake Bing. If the exponential continues, by next year it could rival Google.
[image]
5 replies · 7 reposts · 65 likes · 7.3K views
Alex
Alex@sashock·
@LerSfy I don't think so. Even in Russia's case, nothing was confiscated after the annexation of Crimea; there were sanctions, sure. But even more recently, confiscating assets goes against international law. At worst they freeze them and take the interest.
0 replies · 0 reposts · 1 like · 35 views
Alex
Alex@sashock·
@LerSfy hello, pass the CV around, you never know
0 replies · 0 reposts · 1 like · 15 views
Alex
Alex@sashock·
hello @Indeedfrancais, your recent podcast ad campaign stresses me out, because in the link you hear "go to bla-bla/podcaDsfr". I figured it must be "podcastsfr", but in fact neither one works. go.indeed.com/podcastsfr
0 replies · 0 reposts · 0 likes · 6 views
Alex reposted
John Rush
John Rush@johnrushx·
I'm building the same app using all popular AI IDEs. Their progress is insane:
> Replit got way better at UI
> Cursor shipped coding agents
> Windsurf entered the game and made a nice
> V0 can do full-stack apps
> Bolt is turning noncoders into coders
Mega thread on AI IDEs🧵:
95 replies · 342 reposts · 4K likes · 827.6K views
Alex
Alex@sashock·
@RobynFivush hello, I listened to the Hidden Brain podcast and found it very interesting. I wonder if your 22-question questionnaire for children is publicly available?
0 replies · 0 reposts · 0 likes · 196 views
Alex
Alex@sashock·
@smlpth I see all the issues mentioned, but his example makes you wonder what kind of company "taking users' money" would push this kind of code to prod without noticing. A one-person side project? If you remove the AI from the equation, what about code reviews, unit testing, QA?
0 replies · 0 reposts · 1 like · 26 views
Samuel Path
Samuel Path@smlpth·
This is also my experience. Learning to make the most of these AI coding assistants is not easy, and we're still in the early days. A few nasty bugs got introduced in our codebase because of Cursor. We'll see long term if the time saved is greater than the time wasted debugging.
Mayo Oshin@mayowaoshin

Imo the @cursor_ai + claude sonnet ai coding hype is blown out of proportion. As an early adopter and heavy user of cursor (at least 1,000 hours so far), here are 3 major issues I've noticed over the past couple of months:

1. The first generated output(s) often contain subtle bugs that could cost you a ton of time and money.

Most cursor demos I see on my timeline are focused on the UI, popular frontend frameworks, and basic backend auth/api. These applications can afford to make mistakes, and most aren't deployed to live paying users. But if your application is deployed to production and utilises complex or critical backend logic (i.e. payments processing), subtle bugs begin to emerge.

For example, cursor ai generated the payment order manager class in the image attached below. At first glance it may look good for production, but upon close inspection you'd notice that the `totalPrice` of the order isn't updated when a product is removed. As a result, the system will charge customers incorrectly, leading to loss of customer trust, cascading errors, and potential lawsuits.

The same issue occurs when "refactoring" code using AI or when Cursor attempts to auto-fix problems for you. Often, changes are made to the original codebase that add hidden bugs to the logic. As a result, a lot of time can be wasted reviewing and refactoring AI-generated code. Yes, I know, tests should be run before deployment to catch AI bugs, but let's be honest, most devs don't have the energy or discipline to unit test every commit.

2. Inconsistent quality of outputs.

If you provide the same prompt several times to the chatbot, you may end up with drastically different solutions to the problem. This can happen within the same session of usage or days/weeks apart. In fact, if you copy a previously generated AI solution and ask the cursor chatbot to "review the code for bugs", 9/10 times it finds something wrong.

In addition, if you ask the AI any questions that contain suggestions or alternative solutions, it will apologize and refactor the entire code again. Example:

User question: Thank you for your solution. Is it better to handle the payment orders using a Map function or should I use something like Redis?
AI response: "You're absolutely right, and I apologize for overlooking that crucial aspect."

3. It can significantly increase technical debt.

Even when the AI generates a "good" code block solution, it doesn't take into account the entire software design and architecture of the application. Due to the lack of a holistic perspective, problems emerge, including inconsistent error handling, modularity, and data modelling across the app. As a result, the short-term quick fix often leads to scalability, performance, and maintenance issues over the long run. Once your codebase has grown into a large, complex web of messy components, refactoring will be a long, painful, and costly process.

TLDR: Cursor AI (or AI coding in general) is a useful autocompletion tool that can boost your development productivity in the short run, but in the long run it can waste significant time and energy IF you don't thoroughly review generated outputs. In my experience, Cursor AI is best used as a "junior developer" who often makes mistakes, and you have to review their work carefully to "guide" them correctly. If, however, you simply "trust" the AI outputs due to lack of knowledge, skill, or willingness to review results, the long-term damage will outweigh the initial productivity gains you got so "hyped" about.
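The `totalPrice` bug described in point 1 can be sketched as follows. This is a hypothetical reconstruction of the pattern, not the actual generated class from the screenshot; all names (`PaymentOrderManager`, `addProduct`, `removeProduct`) are assumptions for illustration:

```typescript
interface Product {
  id: string;
  price: number;
}

class PaymentOrderManager {
  private products: Product[] = [];
  private totalPrice = 0;

  addProduct(product: Product): void {
    this.products.push(product);
    this.totalPrice += product.price; // total is kept in sync on add...
  }

  removeProduct(id: string): void {
    // BUG: the product is removed from the list, but totalPrice is
    // never decremented, so the customer is still charged for it.
    this.products = this.products.filter((p) => p.id !== id);
  }

  getTotalPrice(): number {
    return this.totalPrice;
  }
}

// Demonstrating the incorrect charge:
const order = new PaymentOrderManager();
order.addProduct({ id: "a", price: 10 });
order.addProduct({ id: "b", price: 25 });
order.removeProduct("b");
console.log(order.getTotalPrice()); // 35, but the correct charge is 10
```

The point of the example is that the bug passes a glance: each method is locally plausible, and only cross-checking `removeProduct` against the cached total reveals the inconsistency.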

2 replies · 0 reposts · 5 likes · 1.2K views
Alex reposted
Kishan
Kishan@jst_kishan·
Why pay for Claude when I can get my code written by Amazon.
[image]
202 replies · 1.2K reposts · 21.5K likes · 1.1M views
AppleLeaker
AppleLeaker@LeakerApple·
@sashock Ask Samsung how that’s working out for them
1 reply · 0 reposts · 36 likes · 718 views
Alex
Alex@sashock·
@_mcorbin That's normal, I think, a matter of product/market fit: nobody wants to pay a dev to work on tickets from customers paying €100 a month, whereas a big account will fund half your team
0 replies · 0 reposts · 0 likes · 140 views
Alex reposted
nixCraft 🐧
nixCraft 🐧@nixcraft·
Truth 😅
[image]
166 replies · 916 reposts · 11.1K likes · 545.7K views
Alex
Alex@sashock·
@smlpth btw I love reading comments on topics I'm relatively unfamiliar with, I always learn something. Comments are the best thing on the internet
0 replies · 0 reposts · 1 like · 4 views
Samuel Path
Samuel Path@smlpth·
Desperately trying not to think too much about work during my weekend. And here I am thinking about my colleagues’ framework used to build the new ChatGPT website 😅.
[image]
1 reply · 0 reposts · 4 likes · 1K views
Alex reposted
Daniel
Daniel@growing_daniel·
life hack: if you do not have an API key for a service or you cannot afford to run it simply type the name of the service with "api_key" after it and copilot will provide you one free of charge
[image]
179 replies · 2K reposts · 40.5K likes · 3.8M views