Ought
@oughtinc

268 posts

Automate & scale open-ended reasoning. Building @elicitorg, an AI research assistant. Demos: https://t.co/8UGlbJ1eVE Jobs & product roadmap: https://t.co/oDFh35ACEG

San Francisco, CA · Joined April 2016
18 Following · 2.3K Followers
Ought retweeted
Raghav Agrawal @impactology
Folks at @oughtinc are amazing. I think it's one of the few teams focusing on how to innovate in AI via interface paradigms beyond a chatbot interface. A Library and Tutorial for Factored Cognition with Language Models | Ought ought.org/updates/2022-1…
Ought retweeted
Andreas Stuhlmüller @stuhlmueller
The "Essay competition on the Automation of Wisdom and Philosophy" is now live! $25k in prizes. Lots of great questions in the post:

Automation of wisdom
- What is the nature of the sort of good thinking we want to be able to automate? How can we distinguish the type of thinking it's important to automate well and early from types of thinking where that's less important?
- What are the key features or components of this good thinking? How do we come to recognise new ones?
- What are traps in thinking that is smart but not wise? How can this be identified in automatable ways?
- How could we build metrics for any of these things?

Automation of philosophy
- What types of philosophy are language models well-equipped to produce, and what do they struggle with?
- What would it look like to develop a "science of philosophy", testing models' abilities to think through new questions, with ground truth held back, and seeing empirically what is effective?
- What have the trend lines for automating philosophy looked like, compared to other tasks performed by language models?
- What types of training/finetuning/prompting/scaffolding help with the automation of wisdom/philosophy? How much do they help, especially compared to how much they help other types of reasoning?

Thinking ahead
- Considering the research agenda that will (presumably) eventually be needed to automate high-quality wisdom/philosophy:
  - Which parts of the agenda can we expect to automate in a timely fashion?
  - What is the core that we will need humans to address?
  - What do we expect the thorny sticking points to be?
- Why may or may not this problem be solved "by default"? (from a technical standpoint)
- Can we tell concrete stories or vignettes in which the automation of wisdom/philosophy is/isn't important, to triangulate our understanding of what matters?
- What preparatory research could provide the best groundwork for humanity to automate high-quality wisdom/philosophy before it is necessary?
- What projects today or in the near future would be valuable to undertake?

Ecosystems
- If the world were devoting serious attention to this, what would that look like?
- What incentives on institutional actors could push work onto related but less important questions; vice versa, what could help ensure that work remained well-targeted?
- What are the natural institutional homes for this research in the short term? Academia? Nonprofits? Frontier AI labs? Elsewhere in industry?
- What might be needed (proofs, audits, track record?) to enable humans (decision-makers, voters) and human institutions to correctly trust wise advice from AI systems? How could we lay the groundwork for this?
- Ideas for catalysing/sustaining this field?
- Why may or may not this problem be solved "by default"? (from a social standpoint)
Ought retweeted
Charlie George @__Charlie_G
1/ Can large language models detect and correct their own hallucinations when summarizing academic papers? In our new paper, we explore a new method we call factored verification to help answer this question. Blog: blog.elicit.com/factored-verif…
Ought retweeted
Andreas Stuhlmüller @stuhlmueller
People often ask: how does Elicit relate to AI Safety? Here's my answer. In brief, the two main impacts of Elicit on AI Safety are improving epistemics and pioneering process supervision. blog.elicit.com/ai-safety/
Ought retweeted
Andreas Stuhlmüller @stuhlmueller
The next chapter of Elicit begins
Elicit @elicitorg

1/ Announcing our spinoff from @oughtinc into a public benefit corporation, our $9 million seed round, and a much more powerful Elicit! This new Elicit takes the components of the popular literature review workflow and extends them to automate more research workflows.

Ought retweeted
noahdgoodman @noahdgoodman
My very first PhD student, @stuhlmueller, founded @oughtinc after leaving CoCoLab. Ought has done amazing work as a nonprofit lab that helped me see the power of LLMs. I'm excited for their next chapter as @elicitorg!! (And in a new role for me, I'm an "angel")
Elicit @elicitorg

1/ Announcing our spinoff from @oughtinc into a public benefit corporation, our $9 million seed round, and a much more powerful Elicit! This new Elicit takes the components of the popular literature review workflow and extends them to automate more research workflows.

Ought retweeted
Panda @VivaLaPanda
@elicitorg aims to accelerate and augment human reasoning. To that end, we've started by trying to make high-quality literature review go from a rarity to a commodity. Any researcher should be able to look at a new problem area and get up to speed in days instead of weeks.
Ought retweeted
Panda @VivaLaPanda
Working to get here has been an amazing journey! Many early mornings and late nights spent making a tool that intends to be like the green revolution for the tree of knowledge. A little thread on my thinking on Elicit as a product:
Elicit @elicitorg

1/ Announcing our spinoff from @oughtinc into a public benefit corporation, our $9 million seed round, and a much more powerful Elicit! This new Elicit takes the components of the popular literature review workflow and extends them to automate more research workflows.

Ought retweeted
Elicit @elicitorg
7/ There is so much left to do! Help us build intuitive and general interfaces that can run language models at high accuracy and superhuman scale to automate important research - elicit.com/careers
Ought retweeted
Elicit @elicitorg
2/ You can now upload up to 100 of your own papers to extract data from. Great for automating the screening or extraction steps of systematic reviews and meta-analyses.
Ought retweeted
Elicit @elicitorg
1/ Announcing our spinoff from @oughtinc into a public benefit corporation, our $9 million seed round, and a much more powerful Elicit! This new Elicit takes the components of the popular literature review workflow and extends them to automate more research workflows.
Ought retweeted
Panda @VivaLaPanda
Today feels like a really cathartic end to a hectic two weeks. elicit.com is now fully out of beta, and I spent a bunch of time talking about long-term plans and ways to improve how we work.
Ought retweeted
Panda @VivaLaPanda
So glad to work here, so much love to everyone at @oughtinc ♥️♥️♥️