New ask Hacker News story: Ask HN: Multiple LLMs work together to give higher-than-GPT-4o performance

Ask HN: Multiple LLMs work together to give higher-than-GPT-4o performance
3 by VictorPenJust | 2 comments on Hacker News.
I am building a ChatGPT-like platform that, instead of asking a single LLM a question, asks multiple LLMs at the same time and produces a final output by corroborating the responses from each model. The models work together to come up with the final answer, and we see greatly improved performance and reduced hallucinations. Notably, on AlpacaEval 2.0, using only open-source models, we achieved an absolute improvement of 7.6%, from 57.5% (GPT-4 Omni) to 65.1% (Mix Model), and 65.7% when using closed-source models. Is this something people would find useful? I'm not sure what the use cases are, or whether working with one LLM is enough.
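For illustration, here is a minimal sketch of the fan-out-and-aggregate pattern described above: the same question goes to several models in parallel, and an aggregator model corroborates their drafts into one final answer. The model names, the base_url, and the synthesis prompt are illustrative assumptions rather than the poster's actual setup; it also assumes the open-source models are served behind OpenAI-compatible endpoints (for example via vLLM or a hosted gateway).

```python
# Sketch of multi-LLM fan-out and aggregation (assumed setup, not the poster's code).
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # pip install openai

# Hypothetical OpenAI-compatible endpoint serving the open-source models.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Hypothetical model choices.
PROPOSER_MODELS = ["llama-3-70b-instruct", "mixtral-8x22b-instruct", "qwen2-72b-instruct"]
AGGREGATOR_MODEL = "llama-3-70b-instruct"


def ask(model: str, prompt: str) -> str:
    """Send a prompt to one model and return its answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ensemble_answer(question: str) -> str:
    # 1. Fan out: ask every proposer model the same question in parallel.
    with ThreadPoolExecutor(max_workers=len(PROPOSER_MODELS)) as pool:
        drafts = list(pool.map(lambda m: ask(m, question), PROPOSER_MODELS))

    # 2. Aggregate: have one model corroborate the drafts into a final answer,
    #    keeping claims the candidates agree on and dropping unsupported ones.
    synthesis_prompt = (
        "You are given several candidate answers to the same question. "
        "Combine them into one accurate final answer, discarding claims "
        "the candidates do not agree on.\n\n"
        f"Question: {question}\n\n"
        + "\n\n".join(f"Candidate {i + 1}:\n{d}" for i, d in enumerate(drafts))
    )
    return ask(AGGREGATOR_MODEL, synthesis_prompt)


if __name__ == "__main__":
    print(ensemble_answer("Why is the sky blue?"))
```

A usage note on the design: the aggregation step is what drives the hallucination reduction the post claims, since the aggregator can discard claims that only one proposer made; the trade-off is extra latency and cost from querying several models per question.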
