prototechno
@prototechno

174.2K posts

QuantumComputer/ROS/DataScience/DeepLearning/MachineLearning/blockchain/HoloLens/Martialarts/GenerativeArt/Analog Modular synthesizer/Yoga/Akiba Geek

iPhone: 35.011582,135.759636 · Joined January 2009
4.9K Following · 4K Followers
prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
For example, this article mentions neither ChatGPT nor OpenAI, but depending on how it's used, this is technology that could mean never having to pay ChatGPT another yen. And it doesn't hurt Google either. If you're going to write things like "Google is in trouble!", I'd like you to say it while actually using Azure, AWS, paperspace, or AIST's ABCI.
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games@o_ob

It's been shown that what currently looks like the strongest coupling for Japanese LLMs can be forged on Colab...! Trying LoRA fine-tuning of Rinna-3.6B on Google Colab | npaka @npaka123 #note note.com/npaka/n/nc387b…
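The linked note article walks through LoRA fine-tuning of Rinna-3.6B on Colab. As a hedged, dependency-free sketch of the underlying idea (not the article's code), LoRA freezes the pretrained weight W and learns only a low-rank update B @ A, scaled by alpha / r:

```python
def matmul(A, B):
    """Naive matrix multiply for lists of lists."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def lora_update(W, A, B, alpha, r):
    """Return the LoRA-adapted weight W + (alpha / r) * (B @ A).

    W: d_out x d_in frozen pretrained weight.
    B: d_out x r, A: r x d_in -- the small trainable adapter matrices.
    Only r * (d_out + d_in) parameters are trained instead of d_out * d_in.
    """
    BA = matmul(B, A)
    s = alpha / r
    return [[W[i][j] + s * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: a 2x2 weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]      # d_out x r
A = [[3.0, 4.0]]        # r x d_in
adapted = lora_update(W, A, B, alpha=2, r=1)
```

In practice a library such as PEFT handles this per attention projection; the sketch only shows why the memory footprint of the trainable parameters stays small.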

prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
A how-to for a "real hologram display" was explained here, so let me share it. First you capture 3D multi-view images in Blender (the bpy script is a gem), then you turn that into a Fourier-transform plane and expose it onto photoresist.
prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
Blockwise Parallel Transformer for large long-context models: a proposal for a long-context LLM construction method called BPT. Is the ceiling going to be broken through yet again...
Hao Liu@haoliuhl

1/ Excited to share our new paper with @pabbeel on long context models! 📚✍️ Check it out here: arxiv.org/abs/2305.19370 Training 7B models with over 130K or 13B models with over 64K context windows on just 8 A100 GPUs! 😮🖥️ Curious how we did it?
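The BPT paper computes attention and the feedforward network blockwise to cut activation memory. As a rough, illustrative sketch (not the paper's implementation), the key enabler is that softmax attention can be accumulated one key/value block at a time with a running maximum and denominator, so the full score vector is never materialized:

```python
import math

def attention_streaming(q, keys, vals, block=2):
    """Single-query attention over key/value blocks via an online softmax.

    q: list of floats (one query vector).
    keys, vals: lists of vectors. Processes `block` keys at a time,
    rescaling the running accumulator whenever a new max score appears.
    """
    m = -math.inf                 # running max score (for numerical stability)
    denom = 0.0                   # running softmax denominator
    acc = [0.0] * len(vals[0])    # running weighted sum of values
    for start in range(0, len(keys), block):
        kb = keys[start:start + block]
        vb = vals[start:start + block]
        scores = [sum(qi * ki for qi, ki in zip(q, k)) for k in kb]
        new_m = max(m, max(scores))
        scale = math.exp(m - new_m)   # 0.0 on the first block (m = -inf)
        denom *= scale
        acc = [a * scale for a in acc]
        for s, v in zip(scores, vb):
            w = math.exp(s - new_m)
            denom += w
            acc = [a + w * vi for a, vi in zip(acc, v)]
        m = new_m
    return [a / denom for a in acc]
```

The result matches ordinary full attention exactly; only the memory-access pattern changes, which is what lets context windows stretch to 64K–130K tokens on fixed GPU memory.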

prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
CVPR's social media ban is being put to a vote. Jon Barron: "Banning social media just produces the unwanted effect of shifting that role onto others." Hear, hear!
Jon Barron@jon_barron

I will be voting to repeal CVPR's social media ban. I am sympathetic to the rationale for enacting the ban, but it has not had the desired effect of diminishing social media's role --- it just shifted the balance of power away from authors and onto others.

prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
Apple Vision Pro is, in effect, a surgery-free BMI via the eyes that enables biofeedback from the user's brain. It watches the pupil before a click to infer what the user is anticipating; other tricks for estimating cognitive state include quickly flashing visuals or sounds the user cannot consciously perceive and measuring the response. This is only what the patents reveal, but the tweets from a researcher involved in the development are fascinating.
Sterling Crispin 🕊️@sterlingcrispin

I spent 10% of my life contributing to the development of the #VisionPro while I worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group. It’s the longest I’ve ever worked on a single effort. I’m proud and relieved that it’s finally announced.

I’ve been working on AR and VR for ten years, and in many ways, this is a culmination of the whole industry into a single product. I’m thankful I helped make it real, and I’m open to consulting and taking calls if you’re looking to enter the space or refine your strategy.

The work I did supported the foundational development of Vision Pro, the mindfulness experiences, ▇▇▇▇▇▇ products, and also more ambitious moonshot research with neurotechnology. Like, predicting you’ll click on something before you do, basically mind reading. I was there for 3.5 years and left at the end of 2021, so I’m excited to experience how the last two years brought everything together. I’m really curious what made the cut and what will be released later on.

Specifically, I’m proud of contributing to the initial vision, strategy and direction of the ▇▇▇▇▇▇ program for Vision Pro. The work I did on a small team helped green light that product category, and I think it could have significant global impact one day.

The large majority of work I did at Apple is under NDA, and was spread across a wide range of topics and approaches. But a few things have become public through patents which I can cite and paraphrase below.

Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences. So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc. There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about.

One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user's brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.

Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it. Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background.

All of these details are publicly available in patents, and were carefully written to not leak anything. There was a ton of other stuff I was involved with, and hopefully more of it will see the light of day eventually.

A lot of people have waited a long time for this product. But it’s still one step forward on the road to VR. And it’s going to take until the end of this decade for the industry to fully catch up to the grand vision for this tech.

Again, I’m open to consulting work and taking calls if your business is looking to enter the space or refine your strategy. Mostly, I’m proud and relieved this has finally been announced. It’s been over five years since I started working on this, and I spent a significant portion of my life on it, as did an army of other designers and engineers. I hope the whole is greater than the sum of the parts and Vision Pro blows your mind.

prototechno retweeted
やのせん@AI/VR/メタバース教育
With the Apple Vision Pro Developer Labs, developers will apparently be able to test with the Vision Pro. One is also going to open in Tokyo. It may get chaotic with applicants flooding in... >Apple Vision Pro Developer Labs to Open This Summer, Locations Here (roadtovr.com)
prototechno retweeted
三珠さくまる🤹Vtuber@MitamaSakumaru
Precious footage capturing a dimension-transcending collaboration between a plague-mask performer and a virtual performer.
prototechno retweeted
フガクラ@fugakura
Scary... while using ChatGPT it spat out "視覴", a word I'd never seen, and when I searched for it, almost every site that came up was AI-generated text with a recent date. Isn't this essentially a nonexistent word born from an AI hallucination...?
prototechno retweeted
白井暁彦 - Dr.(Shirai)Hakase - Supercell/AICU Games
"Faster sorting algorithms discovered using deep reinforcement learning" AlphaDev によって発見された根本的に異なるアルゴリズム。 いやあ、これふつうにコンパイラとかSIMD最適化とかで実験してるじゃんすご・・・ #AlphaDev