

@louieponto Probably because they are slang heavy. Thai X’ers usually write like that

A scientist from Yemen showed a concept for an AI prison of the future.
Cognify proposes locking criminals in special capsules and re-educating their brains with false memories generated by neural networks. The result: new people who will not want to break the law.
What do you all say?

@stats_feed Thailand is actually smaller than France, but you make it look much bigger. I am not sure whether this is real.

Heartbreaking! Villagers found the head of an ancient sandstone Buddha statue, more than 100 years old, which thieves had cut from the statue and abandoned at the base of a tree beside Naresuan Road, behind the historic site of Wat Mahathat.
Read the news: ch3plus.com/news/social/mo…
#เรื่องเล่าเช้านี้ #ครอบครัวข่าว3 #ข่าวภูมิภาค #อยุธยา

After seeing that Claude Mythos marketing turned out to be, as expected, a scam, I wanted to make a master list of tricks being used to market LLMs.
The master list includes statements taken directly from company leadership or from the "organic marketing" of people on social media, along with an explanation of how each scam works. This is a first attempt, so it is likely incomplete.
The LLM Marketing Scams Master List v1:
"Two more weeks" - the models will soon be good enough to do what we claim.
"They're already good enough" - the models are already good enough to replace workers, but it hasn't happened yet because of x y z reasons.
"We just built God in the backroom, and no, you can't see it" - the models they built in private are actually capable of doing the things we have been waiting for, but they can't let us see them yet for x y z reasons.
"Actually, they already have replaced jobs" - the jobs cut in recent tech layoffs, with AI cited as the reason, have supposedly been replaced by current LLM tech; this ignores market conditions and past data on layoffs during similar conditions.
"You just don't know how to use them as well as me" - the models are good enough, but esoteric prompt engineering is required to get these results, and no, I won't teach you.
"I built an app making big money with LLMs" - they claim to have built startup companies, almost always SaaS companies, that are making them tons of money, but when you ask to see them, they won't show you.
"You aren't using the right model" - claims that you must be using the wrong model and need to use Open Claude 420b-parameter Gemini Plus Pro 6.9 with 4RealThisTime HomerSimpson agent mode enabled. Note that this is used to attack every study on the effectiveness of LLMs, since new models release more frequently than a study can be completed and published.
"You're falling behind" - claims that you need to use the bots now, even though they aren't good enough to fully automate any jobs, because otherwise, when the bots are good enough, you will lose your natural English skills required to prompt effectively.
"All these companies are using LLMs, so do you think you know better than they do?" - pointing to large companies deeply invested in LLMs and claiming their adoption is a success, even though there is no visible improvement in the speed or quality of those companies' output.
"The benchmark score went up" - claiming improvements on the benchmarking tests given to their latest model, despite the model being specifically tuned to improve on those tests, and then conflating better benchmark scores with actually being more able to automate jobs or drastically improve worker productivity.
"It can now count the letters in Strawberry/can now do things it famously couldn't do previously" - saying that it can now count the letters in "strawberry", or instruct you on how to use a cup without a bottom, etc., is often done to suggest the LLM's reasoning has improved, but frequently just involves hard-coding an answer into the service.
"It has escaped our control" - saying that they cannot control the LLM, implying it is conscious or living to some degree, when really it just produced text it wasn't supposed to, or an agent used an app the user's prompt didn't intend, while next-token predicting.
"It's feeling sad/scared/happy/angry, suggesting it is conscious" - they ask the LLM how it is feeling, and it next-token predicts a response that includes an emotion felt by humans, since training data is from human conversations online.
"Costs are going down/the LLM service is profitable" - ignores training costs and capex for hardware; usually this refers only to inference being profitable, which isn't even true in many cases. Training and capex make up 95%+ of the total cost of serving the models.
Did I miss any?


Writing it as "Dak Lady" might be impolite.
เฮชทูโอเอง| CQ9_SUN D58@HtoO_office
It's a transliteration that's really disheartening to see. Extremely "dak".

Normally in a rubber plantation the owner and the tapper split the money fifty-fifty, right? With this machine there's no need to pay a tapper; the owner keeps everything.
Am I supposed to know this? 🐈⬛@pizad_sura
Hey, southerners, what are we going to do now?

#ทรัมป์ announced that the US Navy has begun operations to control the #ช่องแคบฮอร์มุซ, effective immediately, after negotiations in Pakistan collapsed.
The main objective is to stop Iran from collecting the "toll" of roughly 2 million dollars per tanker that it demands from oil tankers, which the US views as extortion and illegal.
The US will inspect and intercept ships that pay Iran, to force the strait to remain open freely and safely, with no Iranian control or toll collection.
Ngio Rai, Thailand 🇹🇭

Crazy theory of who is in the bunny suit.
Power Tie@realPowerTie
President Trump and the Easter Bunny in the situation room.
โป retweeted





















