Eli Brosh

30 posts

@EliBrosh

Head of AI Research at https://t.co/TjQKarfKqP, Machine learning junkie, Coffee snob

Tel-Aviv · Joined November 2012
156 Following · 56 Followers
Eli Brosh@EliBrosh·
Happy to share our AI journey at @Wix, from pioneering GenAI tech to becoming an AI-first company, and exploring the exciting future of human-AI collaboration. Check it out! #GenAI #AI #WixEngineering
Wix Engineering@WixEng

Wix’s AI journey began with bold exploration of a fast-evolving technology - and developed into a clear, company-wide strategy.

In this eye-opening conversation, @GiladBarkan, Head of the Data Science Guild, and Dr. @EliBrosh, Head of AI Research, share @Wix’s AI journey: from the early days of navigating an emerging and unfamiliar technology, to building real AI products at scale - and ultimately laying the foundation for becoming an AI-first company. They discuss the challenges of working through the unknown, the lessons learned from bringing AI into production, and the organizational mindset that made it possible.

Looking ahead, they also explore the next frontier: how humans and AI assistants will work together - and what companies need to do today to stay relevant in the age of AI. Watch:

Replies: 0 · Reposts: 1 · Likes: 3 · Views: 288
Eli Brosh reposted
Andrew Ng@AndrewYNg·
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct [...] than that it will become all-powerful. More and more, computers will program themselves.” Statements discouraging people from learning to code are harmful!

In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment.

As coding becomes easier, more people should code, not fewer! Over the past few decades, as programming has moved from assembly language to higher-level languages like C, from desktop to cloud, from raw text editors to IDEs to AI-assisted coding where sometimes one barely even looks at the generated code (which some coders recently started to call vibe coding), it is getting easier with each step.

I wrote previously that I see tech-savvy people coordinating AI tools to move toward being 10x professionals — individuals who have 10 times the impact of the average person in their field. I am increasingly convinced that the best way for many people to accomplish this is not to be just consumers of AI applications, but to learn enough coding to use AI-assisted coding tools effectively.

One question I’m asked most often is what someone should do who is worried about job displacement by AI. My answer is: Learn about AI and take control of it, because one of the most important skills in the future will be the ability to tell a computer exactly what you want, so it can do that for you. Coding (or getting AI to code for you) is a great way to do that.

When I was working on the course Generative AI for Everyone and needed to generate AI artwork for the background images, I worked with a collaborator who had studied art history and knew the language of art. He prompted Midjourney with terminology based on the historical style, palette, artist inspiration and so on — using the language of art — to get the result he wanted. I didn’t know this language, and my paltry attempts at prompting could not deliver as effective a result.

Similarly, scientists, analysts, marketers, recruiters, and people of a wide range of professions who understand the language of software through their knowledge of coding can tell an LLM or an AI-enabled IDE what they want much more precisely, and get much better results. As these tools continue to make coding easier, this is the best time yet to learn to code, to learn the language of software, and learn to make computers do exactly what you want them to do. [Original text: deeplearning.ai/the-batch/issu… ]
Replies: 514 · Reposts: 2.8K · Likes: 11.9K · Views: 2.1M
Eli Brosh@EliBrosh·
I had a great time talking with @Hosting_Advice about the evolution of AI—from simple assistants to proactive autonomous agents. The real challenge with AI agents isn’t just that they could replace human tasks, it’s ensuring they empower us instead. hostingadvice.com/blog/ai-agents…
Replies: 0 · Reposts: 0 · Likes: 1 · Views: 43
Eli Brosh reposted
Wix Engineering@WixEng·
Check out @EliBrosh's new article on building #AI-powered creativity tools at @Wix, where we explore our Diffusion Layout Transformer approach for custom layouts. This case study is set to be featured at IMVC 2024 next month: wix.engineering/post/beyond-co…
Replies: 0 · Reposts: 1 · Likes: 4 · Views: 490
Eli Brosh@EliBrosh·
Leveraging #AI for innovative layout generation in graphic design: my new blog post explores the Diffusion Layout Transformer approach✨ and its role in building AI-powered creativity tools at @Wix. A case study set to feature at @IMVC2024 next month! wix.engineering/post/beyond-co…
Replies: 0 · Reposts: 5 · Likes: 9 · Views: 574
Eli Brosh@EliBrosh·
@AndrewYNg One practical challenge for this to occur is that the LLM inference runtime needs to be significantly faster, perhaps by an order of magnitude. Given the rapid advancements in GPU processing speeds, this gap might close within the next few years.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 106
Andrew Ng@AndrewYNg·
I think AI agentic workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.

Today, we mostly use LLMs in zero-shot mode, prompting a model to generate final output token by token without revising its work. This is akin to asking someone to compose an essay from start to finish, typing straight through with no backspacing allowed, and expecting a high-quality result. Despite the difficulty, LLMs do amazingly well at this task!

With an agentic workflow, however, we can ask the LLM to iterate over a document many times. For example, it might take a sequence of steps such as:

- Plan an outline.
- Decide what, if any, web searches are needed to gather more information.
- Write a first draft.
- Read over the first draft to spot unjustified arguments or extraneous information.
- Revise the draft taking into account any weaknesses spotted.
- And so on.

This iterative process is critical for most human writers to write good text. With AI, such an iterative workflow yields much better results than writing in a single pass.

Devin’s splashy demo recently received a lot of social media buzz. My team has been closely following the evolution of AI that writes code. We analyzed results from a number of research teams, focusing on an algorithm’s ability to do well on the widely used HumanEval coding benchmark. You can see our findings in the diagram below.

GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.

Open source agent tools and the academic literature on agents are proliferating, making this an exciting time but also a confusing one. To help put this work into perspective, I’d like to share a framework for categorizing design patterns for building agents. My team at AI Fund is successfully using these patterns in many applications, and I hope you find them useful.

- Reflection: The LLM examines its own work to come up with ways to improve it.
- Tool use: The LLM is given tools such as web search, code execution, or any other function to help it gather information, take action, or process data.
- Planning: The LLM comes up with, and executes, a multistep plan to achieve a goal (for example, writing an outline for an essay, then doing online research, then writing a draft, and so on).
- Multi-agent collaboration: More than one AI agent work together, splitting up tasks and discussing and debating ideas, to come up with better solutions than a single agent would.

I’ll elaborate on these design patterns and offer suggested readings for each next week. [Original text: deeplearning.ai/the-batch/issu…]
[Diagram: HumanEval coding benchmark accuracy for GPT-3.5 and GPT-4, zero-shot vs. agentic workflows]
Replies: 204 · Reposts: 1.2K · Likes: 5.2K · Views: 838K
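The draft-critique-revise loop described above can be sketched in a few lines. This is a minimal illustration of the reflection pattern under stated assumptions, not anyone's production agent: `llm` stands for any function that maps a prompt string to a completion string.

```python
from typing import Callable

def agentic_draft(llm: Callable[[str], str], task: str, rounds: int = 2) -> str:
    """Minimal reflection loop: draft, then alternately critique and revise.

    `llm` is any function mapping a prompt string to a completion string.
    """
    draft = llm(f"Write a first draft for the following task:\n{task}")
    for _ in range(rounds):
        # Read over the draft to spot unjustified arguments or extraneous info.
        critique = llm(f"List the weaknesses of the draft below.\n\nDraft:\n{draft}")
        # Revise the draft taking the spotted weaknesses into account.
        draft = llm(
            f"Revise the draft to fix the critique.\n\nDraft:\n{draft}\n\nCritique:\n{critique}"
        )
    return draft
```

Swapping a real model client in for `llm`, and layering on tool use, planning, or multiple agents, turns this same skeleton into the richer workflows the post categorizes.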
Eli Brosh reposted
Argilla@argilla_io·
🎯 Have you missed Intent-based Prompt Calibration and how it is different from basic prompt engineering? Join our next community meetup with Elad Levi and the AutoPrompt framework. buff.ly/3TugFh1
Replies: 2 · Reposts: 3 · Likes: 6 · Views: 855
Eli Brosh@EliBrosh·
4/4 Lastly, we released an open-source system based on our method, designed for modularity. It supports various production use cases, including cost-effective prompt distillation, prompt squashing (combining multiple prompts into one), batching, and synthetic data generation.
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 85
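To make "prompt squashing" concrete, here is a minimal sketch of the idea: several single-task prompts are merged into one numbered prompt so a single LLM call answers them all, and the reply is split back out per task. The helper names and answer format are hypothetical illustrations, not the released system's API.

```python
def squash_prompts(instructions: list[str], text: str) -> str:
    """Merge several single-task prompts into one numbered 'squashed' prompt."""
    tasks = "\n".join(f"{i + 1}. {inst}" for i, inst in enumerate(instructions))
    return (
        "Answer each numbered task about the text below. "
        "Reply with one line per task, formatted as '<number>: <answer>'.\n\n"
        f"Tasks:\n{tasks}\n\nText:\n{text}"
    )

def parse_squashed(reply: str, n_tasks: int) -> dict[int, str]:
    """Split the model's combined reply back into per-task answers."""
    answers = {}
    for line in reply.splitlines():
        num, _, ans = line.partition(":")
        if num.strip().isdigit():
            answers[int(num)] = ans.strip()
    # Missing tasks map to empty strings rather than raising.
    return {i: answers.get(i, "") for i in range(1, n_tasks + 1)}
```

One call instead of N cuts cost and latency, at the price of a parsing step and some risk of cross-task interference.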
Eli Brosh@EliBrosh·
3/4 We also propose a prompt optimization method for generation tasks, demonstrating that this is a special case of a task with an unbalanced dataset. Thus, previous methods might result in a prompt inferior to the initial one, whereas our method boosts prompt performance.
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 57
Eli Brosh@EliBrosh·
1/4 Prompt engineering is a challenging task due to the high sensitivity of LLMs and text ambiguity. In our recent paper, we propose a new method for optimizing prompts through iterative generation of synthetic cases and refinement based on user feedback. bit.ly/4bQhU1b
Replies: 1 · Reposts: 1 · Likes: 1 · Views: 121
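The loop this thread describes (generate synthetic cases, evaluate the current prompt, refine it based on failures) could be sketched roughly as follows. Every function name here is a hypothetical placeholder standing in for a model call, not the paper's actual interface.

```python
from typing import Callable

def optimize_prompt(
    initial_prompt: str,
    generate_cases: Callable[[str], list[tuple[str, str]]],  # prompt -> [(input, label)]
    run_prompt: Callable[[str, str], str],                   # (prompt, input) -> prediction
    refine: Callable[[str, list], str],                      # (prompt, failures) -> new prompt
    iterations: int = 3,
) -> str:
    """Iteratively refine a prompt on synthetic cases, keeping the best scorer."""
    best_prompt, best_score = initial_prompt, -1.0
    prompt = initial_prompt
    for _ in range(iterations):
        cases = generate_cases(prompt)
        # Collect the synthetic cases the current prompt gets wrong.
        failures = [(x, y) for x, y in cases if run_prompt(prompt, x) != y]
        score = 1 - len(failures) / len(cases)
        if score > best_score:
            best_prompt, best_score = prompt, score
        if not failures:
            break  # nothing left to learn from
        prompt = refine(prompt, failures)
    return best_prompt
```

Tracking the best prompt seen so far matters: as the thread notes for generation tasks, a refinement step can otherwise leave you with a prompt worse than the one you started with.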
Eli Brosh@EliBrosh·
[2/2] Check out this fascinating example: changing a prompt from 'Is this movie contain a spoiler? Answer Yes or No' to 'Does this movie review contain a spoiler? Answer Yes or No' altered the performance of GPT-4 on a spoiler detection benchmark from 80.8 to 75.4! 🎬✍️ #gpt4
Replies: 1 · Reposts: 0 · Likes: 0 · Views: 51
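A tiny harness like the following is enough to observe this kind of sensitivity: score each prompt wording on the same labeled examples and compare accuracies. The function and template are illustrative assumptions; `llm` is any prompt-to-completion function.

```python
from typing import Callable, Iterable, Tuple

def prompt_accuracy(
    llm: Callable[[str], str],
    prompt_template: str,
    examples: Iterable[Tuple[str, str]],  # (review, expected "yes"/"no")
) -> float:
    """Fraction of labeled reviews a yes/no prompt classifies correctly."""
    examples = list(examples)
    correct = 0
    for review, label in examples:
        reply = llm(prompt_template.format(review=review)).strip().lower()
        correct += reply.startswith(label)
    return correct / len(examples)
```

Running the same harness with both wordings of the spoiler prompt is exactly the kind of before/after comparison behind numbers like 80.8 vs. 75.4.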
Eli Brosh@EliBrosh·
[1/2] Ever noticed how a small tweak in your prompt can drastically change the outcome with AI models? 🤯 Large Language Models (LLMs) like GPT-4 show incredible efficiency in natural language tasks, but they're surprisingly sensitive to how prompts are structured. #OpenAI #LLM
Replies: 1 · Reposts: 0 · Likes: 1 · Views: 69
Eli Brosh@EliBrosh·
You can only mitigate this LLM-driven phenomenon (with fact-checking, guardrails, SLMs), but you can't remove it completely. For many companies, the ultimate solution could be to rely on detailed terms of use and a clear 'consent' button before a chat begins arstechnica.com/tech-policy/20…
Replies: 0 · Reposts: 0 · Likes: 0 · Views: 44