Thomas Morselt
2.1K posts
Thomas Morselt (@thomasmorselt)
Translates data & AI into more revenue, more efficient processes and valuable insights.
Utrecht · Joined August 2009
554 Following · 284 Followers
Thomas Morselt (@thomasmorselt)
@bramk Got through review in one shot? Any learnings on what made your app pass?
Bram Kanstein (@bramk)
Just got my first iOS app approved in the App Store! 🎉
[image attached]
Thomas Morselt (@thomasmorselt)
@pixel_updates Is it available in Europe? I set up my sports teams but haven't seen them appear while they were playing.
Pixel Updates (@pixel_updates)
Live sports updates in the At a Glance widget are pretty cool.
[image attached]
Punter Phil⚽️ (@punter_phil)
🏴󠁧󠁢󠁥󠁮󠁧󠁿 Premier League Predictions (Weekend) Share yours below ⬇️
[image attached]
Quintin Schevernels (@Quintin24)
Our son Mats is keeping us well informed about the conditions ..... 🇦🇹
[image attached]
AshutoshShrivastava (@ai_for_success)
Official: Google has added Opal to the Gemini web app as an experimental Gem, letting users build reusable AI-powered mini apps from the Gems manager.
[image attached]
Quoting Google Labs (@GoogleLabs):
Opal, meet @Geminiapp. 🤝 We've now brought Opal, our tool for building AI-powered mini apps, directly into the Gemini web app as an experimental Gem. You can find the Opal Gem in your Gems manager and start creating reusable mini apps to unlock even more customized Gemini experiences. Go forth and customize! Learn more here: blog.google/technology/goo…
Logan Kilpatrick (@OfficialLoganK)
Reply here or DM me :) will add folks in as much as we can
Logan Kilpatrick (@OfficialLoganK)
Big upgrade to vibe coding in @GoogleAIStudio lands in Jan, but if you want to test early… 👇🏻
Thomas Morselt (@thomasmorselt)
@kimmonismus I mean, most people don't even get to use deep research. That is the main target audience of Microsoft and Copilot.
Thomas Morselt (@thomasmorselt)
@joshwoodward This growth is well deserved. The hockey stick is ready for 3.0. But what is it with AI teams and graphs?
[image attached]
Josh Woodward (@joshwoodward)
🚀 Let's go!
[image attached]
Thomas Morselt (@thomasmorselt)
@darnocks @chamath Firebase Studio does a nice job of adding cloud hosting on GCP and a Firebase database with authentication, writing data to and reading it from the database. But you are locked into GCP.
David Arnoux (@darnocks)
i totally agree, thing is we've seen this exact pattern with hundreds of devs. that wall you're hitting? it's because ai-generated code scores 2/10 when reviewed by senior devs. works for demos, explodes with complexity.

we tracked it: only 15% of ai code survives to production unchanged. the other 85% gets rewritten when you add that "necessary complexity" - the hover states, transitions, all those tiny details. one founder spent 3 weeks just trying to add payments to his lovable app. authentication was in the frontend. passwords in plaintext. total chaos under the hood.

the problem isn't the ai - it's that it has no architectural context. it's been trained on millions of examples of bad code. so when complexity increases, it doesn't know WHERE to put things.

we built builderbox to fix this. instead of another boilerplate, we give your ai perfect architectural instructions. database patterns, component structure, where business logic belongs. suddenly your ai writes 8/10 code from the start. that founder stuck for 3 weeks? fixed payments in 45 minutes.

you keep the cursor workflow you like - narrow context, approve edits - but now the ai actually knows what good architecture looks like. the vibe zone expands when your ai has a map.
Chamath Palihapitiya (@chamath)
It's unpopular to say, but it's true: You can't vibe anything useful rn. You can barely vibe a working product. And there are still virtually zero examples of anything even moderately useful/successful that was vibed - especially considering how much money has been spent so far on LLM calls to try and do so.

This won't be true forever but is true today. As with other tech cycles, a trough of disillusionment may soon come as folks get frustrated and give up. In the meantime, trying a more team oriented approach to building may result in better outcomes.

Our Software Factory was extracted from our work making useful software for large, demanding enterprises. It's built for teams to be able to build production software together. Alpha users rolling through now. Beta in Sep and GA in October. 8090.ai/waitlist

Quoting Tom Johnson (@tomjohndesign):
This is how I feel about vibe coding. Any project I try that has any kind of complication has this immediate burst of progress. Things are amazing and it feels like a superpower. Then... as I add more complexity, things crash to a halt.

The only projects that I think I can create are ones that fall in this "vibe zone". Prototypes, UIs, products—anything that's simple and has low complexity fits right in that zone. Proof of concepts, interactions, stuff like that. The tools are able to make things that fit in that slot.

But. Everything falls to pieces as that complexity curve increases. And the problem is that any good product design process has increasing complexity. A basic prototype turns into a good prototype as soon as it has layered interactions, transitions, good affordances, hover states, 1000 tiny little details that make something feel correct and real.

The benefit of vibe coding is supposed to be that you move fast and you can whip things out—letting AI do all the work for you. The problem is it loses steam as soon as the necessary complexity is added. It keeps redoing itself, rewriting code, affecting things that are unrelated and then causing other issues. But if you add that complexity, every vibe coding session quickly turns into a whack-a-mole bug-bashing session.

I'm not sure the solution to this. With traditional prototyping the solution is to duplicate, add more complexity, create more frames/scenes, tweak, fork, etc. However with vibe coding, one little prompt can destroy literally everything. There's a stage where I end up walking on prompt eggshells-- trying not to give it too much or too little context so that it doesn't go rogue and break everything.

There's only a few exceptions to this. @cursor and @framer. I can make great progress with Cursor, give it narrow context, and I have to approve the edits that it makes. This feels like a correct workflow. The problem is, I can't see the thing that it's making because it's an IDE, not a visual environment. Yes, I can create local builds and refresh my browser and all that kind of stuff. But the visual aspect is totally lost from the coding experience. It's a developer tool.

Framer gets this right because it only allows narrow updates within a single component on the page. Yes, it's limiting because it can only do a single thing at once, but at least it's not trying to create the entire page from scratch and manage it all through a prompt interface.

These seem like the right approach. @Cursor: Allow the AI to edit anything but allow the user to approve those edits and see them in context. @Framer: Allow the AI to only narrowly edit a single file or component to keep the complexity down to a minimum and reduce catastrophic edits.

I'm optimistic that tools like @Figma, @Lovable, @Bolt, and @V0 can make cool prototypes, but I just keep running into walls when it comes to doing anything more than just a basic interaction prototype. They need to do less IMO. Hopeful that those tools add more controls that are in the same line as Cursor and Framer.

I'll also add that this is similar to how we do it with @Basedash chart generation as well. But we're not a vibe tool in the normal sense so the parallels are a little bit harder to draw.
Kol Tregaskes (@koltregaskes)
@thomasmorselt Yes, it seems to undercut their various levels of models, so this was clearly the target.
Kol Tregaskes (@koltregaskes)
I'm disappointed that GPT-5 is text- and image-only, with no audio or video input/output. The model page mentions image generation via tools, so it must be 4o image. I wasn't expecting it but hoped for image v2. I'll stick with Gemini 2.5 Pro for video and audio uploads.

The October 2024 training cutoff suggests pre-training was long ago, with performance gains from post-training. This is fine, as models have web access, so a recent cutoff would only be a bonus.

Overall, it feels like a GPT-"4.5" - more "cool" than "wow." It's a decent improvement, with better reliability and instruction following, addressing key bottlenecks. Routing issues will be resolved in the coming days, weeks, or months, I'm sure, but the Thinking toggle helps and apparently "think hard" prompting works.

Pricing is competitive, undercutting Gemini models in respective variants, and it's fast (I like the "quick answer" feature). They're likely compromising to get it out to the masses and cutting costs. And the routing problems were always going to be the big problem; give them time, and I'm sure they'll fix it.

Overall, though, this is a good model, but it was never going to reach everyone's crazy expectations. Thank you @openai, but we need a GPT-5o now. ;-)
[image attached]
Thomas Morselt (@thomasmorselt)
@koltregaskes Have it on a Plus account in 🇳🇱. First thoughts: it unlocks a lot of very specific use cases, but needs even more specific instructions for reliable outcomes.
Kol Tregaskes (@koltregaskes)
ChatGPT Agent now rolling out to OpenAI's Team plans, it seems. Anyone who still doesn't have it? Anyone who does, what are your thoughts and use cases? All Pro users should have it; Plus started rolling out a couple of days ago.
[image attached]
Quoting Irvin (@Skillzx5_):
@btibor91 @koltregaskes Agent mode is now available for the Teams plan in America via the iOS app.
Kol Tregaskes (@koltregaskes)
Oh yeah! ...though I have been able to download and install the browser already. I'm wondering if this unlocks more features?
[image attached]
Thomas Morselt (@thomasmorselt)
@Quintin24 I hope many, because it will be somewhat more neutral than the social media feeds.
Quintin Schevernels (@Quintin24)
How many people would use AI to build their "own" voting-advice tool before the next elections?
Jeremiah Owyang (@jowyang)
My final slide from yesterday's keynote at the AI Realized conference on where I think the AI agents space is going with regard to enterprises: Yes, your future customer/partner/employee will be an AI Agent.
[image attached]
Thomas Morselt (@thomasmorselt)
@btibor91 o3 Deep Research works very well with Excel/CSV sheets. It can create comprehensive reports because o3 can run Python. I had it analyse a single Excel sheet and write a report for 26 minutes, with very good results.
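The kind of single-sheet analysis described above can be sketched in plain Python. This is only an illustration of the group-and-summarize step such a report builds on, not o3's actual tooling; the sheet contents and the column names "region" and "revenue" are hypothetical stand-ins for whatever file you upload.

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical sheet contents; in practice this would be the uploaded Excel/CSV file.
SHEET = """region,revenue
North,1200
North,1800
South,900
South,1100
"""

def summarize(csv_text):
    """Group revenue by region and report count, total and mean per group."""
    groups = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups[row["region"]].append(float(row["revenue"]))
    return {
        region: {"n": len(v), "total": sum(v), "mean": statistics.mean(v)}
        for region, v in groups.items()
    }

report = summarize(SHEET)
for region in sorted(report):
    s = report[region]
    print(f"{region}: n={s['n']}, total={s['total']:.0f}, mean={s['mean']:.0f}")
```

A Deep Research run layers search, interpretation and prose on top, but per-group summaries like this are the numeric backbone of the report.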
Tibor Blaho (@btibor91)
Summary of my findings after comparing more Deep Research reports (these are my personal opinions based on my own tests and experience):

- No perfect solution yet: all current deep research tools have limitations and make occasional errors, so you need to verify the outputs; relying on any single solution could lead to incomplete or biased research results.
- A combined approach works best in my opinion: use multiple tools in parallel, then validate, verify, compare and combine all findings to get what you need, which saves a lot of time compared to traditional searching and browsing.

1. ChatGPT Deep research (o3) is probably the best one right now, because it goes into a lot of detail with very powerful search capabilities, offering exceptional depth, best-in-class search, and the most comprehensive results for critical research, and it consistently provides detailed reports with solid references.
2. Gemini Deep Research, with Google search expertise, delivers quality results and is available even for free accounts despite its high capability, but the writing style is a bit verbose, like a scientific wall of text; it performs strongly in technical topics.
3. Grok DeeperSearch offers efficient, quick, focused research with a good balance of speed and accessibility; while it's not as detailed as ChatGPT or Gemini, it's faster and performs well for time-sensitive research, making it great for initial exploration.
4. Claude Advanced research is only available for Max, Team and Enterprise accounts for now, and while it takes its time with sources, the reports are less detailed, with consistent but not outstanding performance.
5. Perplexity gives more surface-level analysis with limited depth on complex questions, and lacks the full capabilities of the others, making it not enough for detailed research.

- All tools already provide mostly up-to-date info, good relevance to research questions and decent structure in most reports, but the biggest differences are in depth, evidence quality and how transparent the methods are.
- Always verify claims across multiple platforms, check citation quality, use focused and detailed research prompts, and compare outputs before making important decisions.
[image attached]