Drew.01🍳

1.7K posts

@wiseb0y

Architect

Universe · Joined April 2011
4.2K Following · 379 Followers
Pinned Tweet
Drew.01🍳@wiseb0y·
@h_Artt_ The artist is a work of art herself; just look at her beauty and charm! Dear Kate, do you mind if I suddenly fall in love with your art and am instantly charmed by your beauty? If your heart is somehow still open, if you are single, take this as a serious proposal!♥️ Stay a goddess!
English
0 replies · 0 reposts · 1 like · 57 views
Drew.01🍳 retweeted
Elon Musk@elonmusk·
Just type “add a girlfriend” to any video on the new Grok Imagine
English
11.5K replies · 11.8K reposts · 100.5K likes · 14.2M views
Drew.01🍳 retweeted
Netokrypto@Netokrypto·
Top 25 @decentraland Creators

Decentraland is a community-driven virtual world on Ethereum where people connect, explore, and build. Here are the top 25 on the leaderboard. Follow them for quality $MANA content. If you are on this list, you are a legend 🫡

1. @kimcurrier: Head of Partnerships and Marketing at Decentraland. Shares insights on growth, collaborations, and marketing. Follow for early partnership news and engagement tips.
2. @CanessaDCL: Long-time Decentraland citizen, co-founder of #spacehost and #AKCB. Builds inclusive and creative communities. Follow for history, projects, and women-led initiatives.
3. @roustan: Web3 artist merging body art with the metaverse. Runs a personal gallery space. Follow for unique art shows and creative lessons in DCL.
4. @VTATV_ETH: Blockchain reporter and activist linked to NFT communities. Covers global events and projects. Follow for video reports, live coverage, and networking.
5. @glamonchain: Visual artist and technologist focused on AI and NFTs. Explores futuristic art and fashion. Follow for creative styles and metaverse fashion insights.
6. @santisiri: Tech storyteller since 2011. Writes about Bitcoin, UBI, and technology narratives. Follow for big-picture takes on DCL in tech.
7. @DJvAPED: House and techno DJ and producer. Blends music with NFTs across communities. Follow for party updates and music collaborations in DCL.
8. @Oh_lohi: Web3 fashion creative in DCL Fashion Village. Known for playful wardrobe design. Follow for styling tips and fashion events in DCL.
9. @BayBackner: XR artist and Head Curator at Decentraland. Teaches at Berklee Valencia. Follow for art week curation and XR design insights.
10. @0x_CryptoAu: Narrative-driven degen and early trend spotter. Shares insights on land flips and alpha. Follow for sharp views on DCL narratives and investments.
11. @PpaPpa152417: NFT art enthusiast in 3D and AI. Creates playful and community-driven projects. Follow for colorful experiments and 3D showcases.
12. @Meryshark: CEO of Facemoons and 3D designer. Builds wearables and architecture in DCL. Follow for custom designs and project inspiration.
13. @em_DCL: Collector across Azuki, CloneX, and Dollhouse. Bridges Web3 wearables with Roblox. Follow for wearable insights and collection breakdowns.
14. @CheddarQueso3D: POAP collector and HerDAO ambassador. Active DAO contributor. Follow for event recaps and DAO entry points.
15. @Rizkgh: Award-winning architect and Verified DCL Partner. Architect of District X. Follow for district planning and certification guidance.
16. @AKCMetaBeast: AI storyteller in the metaverse. Connects narratives across projects. Follow for creative AI-driven stories in DCL.
17. @NFTland: Founder and event host with pirate flair. Runs The WIP Meetup and #rizzlefest. Follow for events, memes, and land strategies.
18. @Mastawainzz: Social goblin with Unity skills. Adds humor to the DCL community. Follow for funny takes and helpful tools.
19. @Aeon_Smash: Mysterious figure in the metaverse. Known for cryptic and timeless posts. Follow for thought-provoking ideas in DCL.
20. @Batearn: Community Engineer at Brave. Leads the BAT ambassador program. Follow for growth strategies and metaverse engagement.
21. @jenmarieinc: Web3 creator and strategist. Strong ties to art and DAOs. Follow for art spotlights and content strategies.
22. @iraxlab: Founder of Spatial.io and AI filmmaker. Expert in metaverse design. Follow for AI-driven events and cinematic worlds.
23. @Crypt0M1notaur: DJ streaming metaverse sets in DCL. Hosts plaza parties and nightlife events. Follow for live sets and music vibes.
24. @officialcubenft: Digital artist with geometric creations. Art inspired by life and shapes. Follow for art drops and creative motivation.
25. @Jambert91: Musician sharing abstract soundscapes. Focuses on groove and ambience. Follow for music that enhances DCL events.
Netokrypto tweet media
English
8 replies · 10 reposts · 29 likes · 2.1K views
Drew.01🍳 retweeted
CoinSniper@shweshwe4729·
I'm applying for the @xos_labs whitelist. The XOS L1 brings back the PoW era of ETH: 20s blocks, ASIC resistant, low storage. It even runs on a Raspberry Pi. Token: X Apply now: x.ink/JS2LYA
English
0 replies · 1 repost · 1 like · 9 views
Drew.01🍳@wiseb0y·
@cute_Alexandra2 Everything is perfect! Those legs can't help but catch the eye!
Russian
0 replies · 0 reposts · 2 likes · 162 views
Alexandra_02@cute_Alexandra2·
I have pretty legs, size 36, with straight toes and a fresh pedicure) does anyone find that attractive in a girl??
Alexandra_02 tweet media
Russian
166 replies · 17 reposts · 1.1K likes · 73.1K views
Drew.01🍳 retweeted
Klink Finance@klinkfinance·
$1,000 $KLINK Giveaway 💰 We’re running a Guess the CEX campaign! Think you know where $KLINK will be listed? Here’s how to enter:
1️⃣ Tag 1 CEX you think $KLINK will get listed on
2️⃣ Tag 1 crypto KOL who needs to know about $KLINK
3️⃣ Tag 1 friend to join the campaign
🔗 Winners: 4 winners will be announced in October #klink #TOKEN2049
Klink Finance tweet media
English
2.4K replies · 2.3K reposts · 4.3K likes · 223.2K views
Drew.01🍳 retweeted
Aparna Dhinakaran@aparnadhinak·
Prompts, like models, should improve with feedback, not stay static. Here’s how prompt learning works:
1️⃣ The prompt is treated as an online object, something that evolves over time
2️⃣ An LLM (or a human) provides an assessment and a natural-language critique in English, unlike most prompt optimization methods
3️⃣ That natural-language feedback is used as an error signal, passed into a MetaPrompt
4️⃣ The MetaPrompt updates the original prompt, either by rewriting it or by inserting targeted instructions into specific sections
English feedback becomes the learning signal.
Aparna Dhinakaran tweet media
English
1 reply · 3 reposts · 37 likes · 4K views
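The four-step loop in the tweet above can be sketched in a few lines of Python. This is a minimal illustration, not any released implementation: the function names and the stubbed critique/rewrite "models" are all hypothetical, standing in for real LLM calls.

```python
def prompt_learning_step(prompt, task, output, critique_fn, rewrite_fn):
    """One prompt-learning update: a natural-language critique acts as the
    error signal that a MetaPrompt uses to revise the original prompt."""
    # Steps 1-2: assess the output and produce an English critique.
    critique = critique_fn(task, output)
    if critique is None:          # no feedback, so the prompt stays as-is
        return prompt
    # Steps 3-4: pass the critique into a MetaPrompt that updates the prompt
    # by inserting a targeted instruction.
    meta_prompt = (
        "You maintain a system prompt. Given the critique below, "
        "insert a targeted instruction into the prompt.\n"
        f"CRITIQUE: {critique}"
    )
    return rewrite_fn(meta_prompt, prompt)

# Toy stand-ins for the LLM (or human) assessor and the MetaPrompt model.
def fake_critique(task, output):
    # Flags answers that omit units.
    return "Always state units in the final answer." if "m/s" not in output else None

def fake_rewrite(meta_prompt, prompt):
    # Appends the critiqued instruction to the prompt.
    instruction = meta_prompt.split("CRITIQUE: ")[1]
    return prompt + "\n- " + instruction

prompt = "You are a careful physics tutor."
updated = prompt_learning_step(
    prompt, "How fast is the ball?", "42", fake_critique, fake_rewrite
)
print(updated)
```

Run again on an output that already states units and the critique is `None`, so the prompt is left unchanged, which is the sense in which the prompt is an "online object" that evolves only when feedback arrives.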
Drew.01🍳 retweeted
Aparna Dhinakaran@aparnadhinak·
Reinforcement Learning in English: Prompt Learning Beyond Just Optimization

@karpathy tweeted something this week that I think many of us have been feeling: the resurgence of RL is great, but it’s missing the big picture. We believe the industry chasing traditional RL is going in the wrong direction. In chasing better policies and reward shaping, it’s easy to miss a simpler tool we already have: language.

Today, we’re releasing our first research on Prompt Learning, an approach that uses natural-language feedback to guide and improve agents. It’s not prompt tuning, chain-of-thought prompting, or DSPy Simba, though we love what the @DSPyOSS team is building. Instead of adjusting weights, we use MetaPrompting, where English evals and critiques (rather than just the scalar metrics the industry has relied on so far) drive targeted prompt updates.

Tagging people who would find this interesting: @chengshuai_shi @ZhitingHu @HamelHusain @sh_reya @charlespacker @eugeneyan @swyx @dan_iter @sophiamyang @AndrewYNg @lateinteraction @cwolferesearch @tom_doerr @imjaredz @lennysan @shyamalanadkat @aakashgupta @apolloaievals @jerryjliu0 @joaomdmoura @jxnlco @abacaj @garrytan
Aparna Dhinakaran tweet media
Andrej Karpathy@karpathy

Scaling up RL is all the rage right now; I had a chat with a friend about it yesterday. I'm fairly certain RL will continue to yield more intermediate gains, but I also don't expect it to be the full story. RL is basically "hey, this happened to go well (/poorly), let me slightly increase (/decrease) the probability of every action I took for the future". You get a lot more leverage from verifier functions than from explicit supervision, which is great. But first, it looks suspicious asymptotically: once tasks grow to be minutes or hours of interaction long, are you really going to do all that work just to learn a single scalar outcome at the very end and use it to directly weight the gradient? Second, beyond asymptotics, this doesn't feel like the human mechanism of improvement for the majority of intelligence tasks. We extract significantly more bits of supervision per rollout via a review/reflect stage along the lines of "what went well? what didn't go so well? what should I try next time?", and the lessons from this stage feel explicit, like a new string to be added to the system prompt for the future, optionally to be distilled into weights (/intuition) later, a bit like sleep. In English, we say something becomes "second nature" via this process, and we're missing learning paradigms like this. The new Memory feature is maybe a primordial version of this in ChatGPT, though it is only used for customization, not problem solving. Notice that there is no equivalent of this for e.g. Atari RL, because there are no LLMs and no in-context learning in those domains.

Example algorithm: given a task, do a few rollouts, stuff them all into one context window (along with the reward in each case), and use a meta-prompt to review/reflect on what went well or not, obtaining a string "lesson" to be added to the system prompt (or, more generally, used to modify the current lessons database). Many blanks to fill in, many tweaks possible; not obvious.

Example of a lesson: we know LLMs can't easily see letters due to tokenization and can't easily count inside the residual stream, hence 'r' in 'strawberry' being famously difficult. The Claude system prompt had a "quick fix" patch: a string was added along the lines of "If the user asks you to count letters, first separate them by commas, increment an explicit counter each time, and do the task like that". This string is the "lesson", explicitly instructing the model how to complete the counting task. The open questions are how this might fall out from agentic practice instead of being hard-coded by an engineer, how it can be generalized, and how lessons can be distilled over time so they don't bloat context windows indefinitely.

TLDR: RL will lead to more gains because, when done well, it is a lot more leveraged, bitter-lesson-pilled, and superior to SFT. It doesn't feel like the full story, especially as rollout lengths continue to expand. There are more S-curves to find beyond it, possibly specific to LLMs and without analogues in game- or robotics-like environments, which is exciting.

English
29 replies · 147 reposts · 1.1K likes · 177.4K views
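The "example algorithm" in the quoted tweet (a few rollouts with rewards stuffed into one review context, a meta-prompt reflection, and an explicit string lesson added to a lessons database) can be sketched as follows. The rollout and reflect "models" are toy stubs, and every name here is illustrative, not from any real system.

```python
def run_episode(task, rollout_fn, reflect_fn, lessons, n_rollouts=3):
    """One iteration of the lesson-learning loop from the tweet."""
    # Do a few rollouts, each returning (transcript, scalar reward).
    transcripts = [rollout_fn(task, lessons) for _ in range(n_rollouts)]
    # Stuff them all into one context window, rewards included.
    context = "\n".join(
        f"ROLLOUT {i}: {text} (reward={r})"
        for i, (text, r) in enumerate(transcripts)
    )
    # Meta-prompt review/reflect stage yields an explicit string "lesson".
    lesson = reflect_fn(
        "What went well? What didn't? What should be tried next time?\n" + context
    )
    # Add it to the lessons database, avoiding duplicates so the
    # context window doesn't bloat indefinitely.
    if lesson and lesson not in lessons:
        lessons.append(lesson)
    return lessons

# Toy environment echoing the 'r' in 'strawberry' example: rollouts only
# succeed once the counting lesson is in the lessons database.
def fake_rollout(task, lessons):
    ok = any("counter" in lesson for lesson in lessons)
    return ("s,t,r,a,w,b,e,r,r,y -> 3", 1.0) if ok else ("2", 0.0)

def fake_reflect(review_context):
    # Only emits a lesson when the rollouts failed.
    if "reward=1.0" not in review_context:
        return "Separate letters by commas and count with an explicit counter."
    return None

lessons = []
lessons = run_episode("count r in strawberry", fake_rollout, fake_reflect, lessons)
print(lessons)
```

A second call with the learned lesson in place succeeds on every rollout, so the reflect stage emits nothing and the database stays fixed, which is the loop's stopping behaviour on solved tasks.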
подушка@sofitame·
What does a lonely girl want? #нюдсочетверг
подушка tweet media
Russian
104 replies · 47 reposts · 1.3K likes · 60.2K views
Drew.01🍳 retweeted
ᴋᴀᴛᴇ.@h_Artt_·
🖤SOLD OUT🖤 I’m speechless!🖤 Awwwww! The amazing and beautiful @Hitokirivon made it happen!🖤 I can't describe in words how much I appreciate your support🖤 thank you soooooooo much!🖤 you saw the depth and meaning in this…🖤 I’m soooo happy now!🖤 ~ loooooove ~
ᴋᴀᴛᴇ. tweet media
English
5 replies · 2 reposts · 10 likes · 209 views
Drew.01🍳@wiseb0y·
@stacexex Pretty much, quite accurate: handsome, and dirty-minded, in my prime, very sweet!
Russian
0 replies · 0 reposts · 0 likes · 16 views
StaceX@stacexex·
@wiseb0y And are you actually handsome and young?
Russian
1 reply · 0 reposts · 1 like · 30 views
StaceX@stacexex·
I'm pretty and dumb, ask me your questions
Russian
2 replies · 0 reposts · 4 likes · 1.4K views
Drew.01🍳 retweeted
Margo Margarita@AckermannMarga2·
– Tell me honestly, Shura, how much money do you need to be happy?
Russian
31 replies · 82 reposts · 903 likes · 40.9K views
Drew.01🍳 retweeted
НастасьЮрьна@AnaZlato·
hey, want to become my hyperfixation? I'll send you photos trying to win a scrap of attention, and you'll just ignore me for days on end
НастасьЮрьна tweet media
Russian
81 replies · 16 reposts · 1.1K likes · 42.2K views