yipclouds jedi
@yipclouds

685 posts

peace, meditation, web3, founder https://t.co/VMmSyCOhS7 CEO

Joined May 2021
1.6K Following · 1.5K Followers
yipclouds jedi retweeted
Luuu (@Just_Luuuu)
Dear Polkadot community, here is something I wanna share with you about @ChaoticApp. All in this article: medium.com/@Luuuuu/c285c2…
[image]
21 replies · 22 reposts · 81 likes · 5.7K views
yipclouds jedi retweeted
Sandy.agi; gm.Brave (@sandy_carter)
I'm learning about ZebraCorns! ZebraCorns is a fun, hybrid term that typically refers to:
1/ A blend of “Zebra” and “Unicorn” startups. Unicorns are privately held startups valued at over $1 billion.
2/ Zebras are companies that are both profitable and purposeful, prioritizing sustainability and impact over rapid, venture-backed growth.
3/ ZebraCorns combine the best of both: mission-driven, socially conscious businesses that also achieve high valuations.
On @AjeetK Spaces today on Women in Tech ... @yipclouds
[image]
4 replies · 2 reposts · 14 likes · 464 views
yipclouds jedi retweeted
Women in Polkadot (@WomenInPolkadot)
Dear ladies, we are happy to announce that we are planning our next meetup during the @Web3summit in Berlin. We have prepared a cool program, and we are expecting you! ♥️ Get your ticket below: lu.ma/os3131ih
5 replies · 11 reposts · 29 likes · 1.2K views
Solene (@_SDAV)
gm from Cannes 🇫🇷 How many hours will we sleep this week?
[3 images]
19 replies · 0 reposts · 133 likes · 2.9K views
Margus Tsahkna (@Tsahkna)
During the visit of the Vietnamese Prime Minister & his delegation to Estonia, we signed two Memorandums of Understanding: one on cooperation between our foreign ministries, & another on digital transformation & the digital economy. 🇪🇪🇻🇳 @MOFAVietNam
[4 images]
Quoting Margus Tsahkna (@Tsahkna):
A great pleasure to welcome Vietnamese FM @FMBuiThanhSon in Tallinn today. We discussed global security & 🇪🇪🇻🇳 economic cooperation. I stressed that security in Europe & Asia is interlinked—& that Russia, violating international law, must not be allowed to succeed.
2 replies · 1 repost · 9 likes · 1K views
yipclouds jedi retweeted
Women in Polkadot (@WomenInPolkadot)
Wondering what the vibes were like at our last meetup? Ask @joinwebzero how they made it real, and @CryptoGirlsClub if they had fun. 😉 😏
9 replies · 23 reposts · 96 likes · 14K views
yipclouds jedi retweeted
Women in Polkadot (@WomenInPolkadot)
We are preparing something cool for you in Toronto! 🇨🇦 Hint: 🩷🚌
3 replies · 3 reposts · 27 likes · 788 views
yipclouds jedi retweeted
Ema Lovšin (@ema_lovsin)
Met many amazing builders at ChangeNOW, focused on sustainable & impactful solutions. Also great connecting with Yip Thy-Diep Ta from the @Polkadot community! Invited by @Microsoft
[2 images]
7 replies · 15 reposts · 51 likes · 1K views
Superteam Germany (@SuperteamDE)
Co-working, but make it Ghibli style. 😎🔥
[image]
10 replies · 6 reposts · 79 likes · 2.8K views
yipclouds jedi retweeted
Women in Polkadot (@WomenInPolkadot)
To all incredible women, not only in @Polkadot. Build, grow, innovate, and continue to inspire us. 🪷
[image]
0 replies · 2 reposts · 14 likes · 241 views
yipclouds jedi retweeted
Women in Polkadot (@WomenInPolkadot)
Looks like this is the last sticker of Women of Polkadot that we have 👀 Print more or go for a new design?
[image]
3 replies · 5 reposts · 21 likes · 556 views
yipclouds jedi retweeted
Andrew Ng (@AndrewYNg)
The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.) Here’s what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead. I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. The business of training foundation models and selling API access is tough. Many companies in this area are still looking for a path to recouping the massive cost of model training. Sequoia’s article “AI’s $600B Question” lays out the challenge well (but, to be clear, I think the foundation model companies are doing great work, and I hope they succeed). In contrast, building applications on top of foundation models presents many great business opportunities. Now that others have spent billions training such models, you can access these models for mere dollars to build customer service chatbots, email summarizers, AI doctors, legal document assistants, and much more.

Scaling up isn’t the only path to AI progress. There’s been a lot of hype around scaling up models as a way to drive progress. To be fair, I was an early proponent of scaling up models. A number of companies raised billions of dollars by generating buzz around the narrative that, with more capital, they could (i) scale up and (ii) predictably drive improvements. Consequently, there has been a huge focus on scaling up, as opposed to a more nuanced view that gives due attention to the many different ways we can make progress. Driven in part by the U.S. AI chip embargo, the DeepSeek team had to innovate on many optimizations to run on less-capable H800 GPUs rather than H100s, leading ultimately to a model trained (omitting research costs) for under $6M of compute. It remains to be seen if this will actually reduce demand for compute. Sometimes making each unit of a good cheaper can result in more dollars in total going to buy that good. I think the demand for intelligence and compute has practically no ceiling over the long term, so I remain bullish that humanity will use more intelligence even as it gets cheaper.

I saw many different interpretations of DeepSeek’s progress here on X, as if it was a Rorschach test that allowed many people to project their own meaning onto it. I think DeepSeek-R1 has geopolitical implications that are yet to be worked out. And it’s also great for AI application builders. My team has already been brainstorming ideas that are newly possible only because we have easy access to an open advanced reasoning model. This continues to be a great time to build! [Original text: deeplearning.ai/the-batch/issu… ]

282 replies · 1K reposts · 4.3K likes · 616.4K views
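The token-price comparison in the tweet above can be checked with a few lines of arithmetic. A minimal sketch, using the tweet's own figures ($60 and $2.19 per million output tokens); the helper name is illustrative, not a real API:

```python
# Prices quoted in the tweet, in dollars per million output tokens.
O1_PRICE_PER_M = 60.00   # OpenAI o1
R1_PRICE_PER_M = 2.19    # DeepSeek-R1

def output_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of generating `tokens` output tokens at a given price."""
    return tokens / 1_000_000 * price_per_million

ratio = O1_PRICE_PER_M / R1_PRICE_PER_M
print(f"price ratio: {ratio:.1f}x")  # ~27.4x, i.e. the "nearly 30x" in the tweet
print(f"10M output tokens on o1: ${output_cost(10_000_000, O1_PRICE_PER_M):.2f}")
print(f"10M output tokens on R1: ${output_cost(10_000_000, R1_PRICE_PER_M):.2f}")
```

At 10M output tokens the gap is roughly $600 versus $22, which is the scale of difference driving the "commoditization" argument.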
Charles Hoskinson (@IOHK_Charles)
Here's a thought: if I put a million-dollar grant for travel and per diem on the table, do you think we could get all the Cardano builders to one place to showcase our ecosystem? What would be the best place to do that?
709 replies · 376 reposts · 3.5K likes · 200.9K views