Exploring Real-World LLM Solutions with Solar - Webinar Summary
2024/08/20 | Sungmin Park
Nurturing AI- and work-ready talent with Solar LLM
Have you ever wondered how we can effectively harness the potential of Large Language Models (LLMs) in our daily lives and business environments? Keeping pace with the rapidly changing era of LLMs, Upstage has been running a “Full-stack LLM project course with Solar” for college students to nurture AI- and work-ready talent. In Korea, the course began with students from Seoul National University in May, followed by students from KAIST in July. During the course, students had opportunities to network with professionals, form teams, and build LLM applications using Solar LLM.
From these efforts, six outstanding projects were selected for an Upstage online webinar, a great opportunity to share ideas for making LLM applications useful to more people.
In this blog post, we'll dive into the six teams' ideas presented at the "Exploring Real-World LLM Solutions with Solar" webinar and share a variety of use cases that bring LLMs into real-world applications.
Exploring Solar LLM Use Cases
1. 🎥 YouTube Comment Replier for Upstage: YouskUp
“YouskUp" is an LLM application designed to assist with replying to YouTube comments, developed collaboratively by students from Soongsil University's HUMANE Lab and an employee from the company "Weknew”.
Project Goal:
Develop an LLM application that automatically replies to comments on videos from Upstage's YouTube channel.
Utilize Upstage's Solar API to generate accurate and relevant replies.
Improve comment management efficiency and decrease the strain on YouTube channel administrators.
Enhance channel credibility by providing more reliable and useful answers through RAG (Retrieval Augmented Generation) technology.
Project Process:
Use RAG to embed Upstage-related materials (press releases, blog posts, YouTube videos) into a vector store so that accurate and useful replies can be generated automatically.
Upstage's Solar-1-mini-chat is used as the LLM.
Built on the Solar API, the comment replier can be deployed on YouTube channels run by individuals or businesses and serve as an automated channel-management system; a minimal sketch of the reply-generation step follows below.
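To make the flow concrete, here is a minimal sketch of the reply-generation step, assuming the OpenAI-compatible Solar chat endpoint. The retrieve_context helper, prompt wording, and example comment are hypothetical stand-ins for the team's actual RAG pipeline.

```python
# Minimal sketch: generate a reply to a YouTube comment with Solar,
# grounding the answer in retrieved Upstage material (RAG).
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["UPSTAGE_API_KEY"],
    base_url="https://api.upstage.ai/v1/solar",  # OpenAI-compatible Solar endpoint
)

def retrieve_context(comment: str) -> str:
    """Hypothetical retrieval step: look up the embedded press releases,
    blog posts, and video transcripts most similar to the comment."""
    return "Upstage's Solar Mini is a compact, high-performance LLM."

def reply_to_comment(comment: str) -> str:
    context = retrieve_context(comment)
    response = client.chat.completions.create(
        model="solar-1-mini-chat",
        messages=[
            {"role": "system",
             "content": "You reply to comments on Upstage's YouTube channel. "
                        f"Answer politely, using only this context:\n{context}"},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content

print(reply_to_comment("What exactly is Solar Mini?"))
```

In a production setup, a generated reply would typically be reviewed by a channel administrator before being posted.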
2. 🖼️ Enhancing the viewing experience of an exhibition, "Docent AI"
The second use case is the "Docent AI" service, created by a team of AI education course students from Upstage and a student from Seoul National University.
Project Goal:
Develop an LLM-based service to enhance users' art exhibit experiences.
Provide in-depth commentary on artworks.
Recommend artworks similar to the one currently being viewed, using a similarity search with Solar embedding models (a sketch follows the project process below).
If the LLM struggles with a topic, use Solar LLM to extract keywords and run an internet search for additional information.
Enable users to engage in conversation and share their thoughts about the artwork with Docent AI.
Create a platform for users to easily share their experiences on social media through a dialog-based exhibition experience archiving system.
Project Process:
To serve a wide range of art exhibitions, gather and preprocess data for a total of 136 artworks showcased in 5 exhibitions at the National Museum of Modern and Contemporary Art.
Collect data such as artist, artwork title, creation year, and artwork description.
Within each artwork description, separate the text about the artist from the text about the artwork itself so that searches return more accurate results.
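To make the recommendation step concrete, here is a minimal sketch of similarity search over Solar embeddings, assuming Upstage's query/passage embedding models. The tiny in-memory artwork list and the cosine-similarity ranking are illustrative assumptions, not the team's actual index.

```python
# Minimal sketch: recommend the artwork whose description is most similar
# to the one currently being viewed, using Solar embeddings.
import os
import numpy as np
from openai import OpenAI

client = OpenAI(api_key=os.environ["UPSTAGE_API_KEY"],
                base_url="https://api.upstage.ai/v1/solar")

# Toy stand-in for the preprocessed exhibition data (title -> description).
artworks = {
    "Monochrome Composition": "Abstract monochrome painting built from layered, repeated brushstrokes.",
    "City of Memory": "Multi-channel video installation reflecting on urban memory and redevelopment.",
}

def embed(text: str, model: str) -> np.ndarray:
    vec = client.embeddings.create(model=model, input=text).data[0].embedding
    return np.array(vec)

# Embed each artwork description once (passage side of the index).
index = {title: embed(desc, "solar-embedding-1-large-passage")
         for title, desc in artworks.items()}

def recommend(current_description: str) -> str:
    q = embed(current_description, "solar-embedding-1-large-query")
    cosine = {title: float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
              for title, v in index.items()}
    return max(cosine, key=cosine.get)  # title of the most similar artwork

print(recommend("A quiet abstract canvas covered in repeated monochrome marks."))
```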
3. 🩺 Symptom Survey Voice Chatbot, "WHENILL.ai"
Have you ever experienced sudden illness and been unsure which hospital or emergency room to visit? Here's "WHENILL.ai", a voice chatbot designed to help with your medical questions. This project is the result of UNDERMiLLi CEO Hojong Lee's participation in Upstage's LLM project course. The project's goals and process are as follows:
Project Goal:
Develop a voice chatbot that helps first-time patients fill out a questionnaire about their basic information, symptoms, and medical history.
Ease the burden of medical information asymmetry and long waiting times for patients.
Improve accuracy and efficiency for medical professionals.
Project Process:
Use speech-to-text (STT), an LLM, and text-to-speech (TTS) for voice interaction.
Employ zero-shot prompting so the model can handle medical terms and context.
Generate questionnaire items in a logical order using a system similar to RAG (Retrieval Augmented Generation); a sketch of one voice-chatbot turn follows below.
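The skeleton below sketches one turn of such a loop. The STT and TTS steps are left as placeholders for whichever speech services the team used, and the intake prompt, model choice, and example utterance are assumptions for illustration.

```python
# Minimal sketch of one voice-chatbot turn: STT -> LLM (zero-shot intake
# prompt) -> TTS. Speech components are placeholders.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["UPSTAGE_API_KEY"],
                base_url="https://api.upstage.ai/v1/solar")

INTAKE_PROMPT = (
    "You are a pre-visit intake assistant. Given the conversation so far, "
    "ask the single most relevant next question about the patient's basic "
    "information, symptoms, or medical history. Keep it short and polite."
)

def speech_to_text(audio: bytes) -> str:
    raise NotImplementedError("plug in an STT service here")

def text_to_speech(text: str) -> bytes:
    raise NotImplementedError("plug in a TTS service here")

def next_question(history: list[dict]) -> str:
    response = client.chat.completions.create(
        model="solar-1-mini-chat",
        messages=[{"role": "system", "content": INTAKE_PROMPT}, *history],
    )
    return response.choices[0].message.content

history = [{"role": "user", "content": "I've had a sharp stomach ache since last night."}]
print(next_question(history))  # e.g. a follow-up about pain location or duration
```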
4. 📑 Paper Reading Assistance Service for Degree Program Students
This idea was created by a collaborative team of students from KAIST and POSTECH, along with an employee of the company "VAIV". Its purpose is to help people read and comprehend research papers more easily.
Project Goal:
Create a service optimized for reading research papers.
Reading the latest research papers often requires background knowledge, so allow users to upload multiple papers at once and ask questions across them.
Provide an editing function so users can read and annotate papers.
Have the LLM cite sources with each answer so users can verify it themselves.
Project Process and Results:
Provided RAG-based querying across multiple PDFs.
Converted PDF files into HTML and served them with an editor function.
Provided a highlight function to mark important content.
Provided a real-time translation function.
Verified answers with Upstage's Groundedness Check API, which the team found more reliable than SOTA models (a usage sketch follows below).
Answered with cited references.
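As a rough illustration of the verification step, the snippet below checks whether a candidate answer is supported by the retrieved passage before it is shown to the user. It assumes the Groundedness Check is called through the same OpenAI-compatible chat endpoint under a dedicated model name, with the context as the user message and the answer as the assistant message; the example passage and answer are made up.

```python
# Minimal sketch: verify that a RAG answer is grounded in its retrieved
# context before displaying it, using Upstage's Groundedness Check.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["UPSTAGE_API_KEY"],
                base_url="https://api.upstage.ai/v1/solar")

def groundedness(context: str, answer: str) -> str:
    """Returns a label such as 'grounded' or 'notGrounded'."""
    response = client.chat.completions.create(
        model="solar-1-mini-groundedness-check",
        messages=[
            {"role": "user", "content": context},      # retrieved paper passage
            {"role": "assistant", "content": answer},  # candidate LLM answer
        ],
    )
    return response.choices[0].message.content

passage = "The paper reports a 3.2-point BLEU improvement over the baseline."
print(groundedness(passage, "The proposed method improves BLEU by 3.2 points."))
```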
5. 📧 Email Assistance Service, "Smart Email"
Are you tired of sifting through piles of emails to find the information you need? Introducing "Smart Email", a service created by students from KAIST's School of Computing. It uses RAG (Retrieval Augmented Generation) technology to help users organize their received emails and generate responses from them by interacting with a chatbot.
Key technologies used in the service:
Embedding: Solar-embedding-1-large
Handles multiple languages
LLM: Solar-1-mini-chat
Groundedness Check: Solar-1-mini-groundedness-check
OCR: Upstage Document OCR
GraphDB: Neo4j
Organizes extracted email metadata into a Knowledge Graph that can be browsed visually (a minimal Neo4j sketch follows below)
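To illustrate the graph side, here is a minimal sketch of writing extracted email metadata into Neo4j with the official Python driver. The connection details, node labels, and example fields are assumptions; in the real service these values would come from the OCR and LLM pipeline.

```python
# Minimal sketch: store email metadata as a small knowledge graph in Neo4j
# so senders, emails, and topics can be browsed visually.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_email(tx, sender: str, subject: str, topic: str):
    tx.run(
        """
        MERGE (p:Person {email: $sender})
        MERGE (t:Topic  {name: $topic})
        CREATE (e:Email {subject: $subject})
        MERGE (p)-[:SENT]->(e)
        MERGE (e)-[:ABOUT]->(t)
        """,
        sender=sender, subject=subject, topic=topic,
    )

with driver.session() as session:
    # In the real service these fields would be extracted by the LLM/OCR steps.
    session.execute_write(add_email, "alice@example.com", "Q3 budget review", "finance")

driver.close()
```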
6. 🕹️ LRPG (LLM RPG): A Role-Playing Game Where Users Create and Play in Their Own Worlds
Have you heard of TRPG (Tabletop Role-Playing Game)? TRPG, featured in the popular Netflix series Stranger Things, is a board game where participants take on the roles of their characters and create a story together, with a Game Master guiding the narrative.
However, both TRPG and CRPG (Computer Role-Playing Game) come with limitations. TRPG requires participants to act out their characters and relies on an experienced Game Master, while CRPG, although playable solo, is restricted by predetermined choices from game developers. To address these issues, a team of KAIST students and developers has designed LRPG (LLM RPG), a solution that allows users to create and play in their own worlds with an AI Game Master.
Project Goal:
Allows users to play TRPG by themselves, immersed in a world of their choice.
Utilizes Solar Mini to generate the desired story for the user.
Empowers the LLM to act as the Game Master, enabling users to enjoy a TRPG-like CRPG experience; a sketch of one Game Master turn follows below.
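The snippet below sketches what a single Game Master turn might look like with Solar Mini as the narrator. The prompt wording, world description, and player action are illustrative assumptions, not the team's actual implementation.

```python
# Minimal sketch: one LRPG turn where the LLM plays Game Master,
# narrating consequences and offering the player new choices.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["UPSTAGE_API_KEY"],
                base_url="https://api.upstage.ai/v1/solar")

GM_PROMPT = (
    "You are the Game Master of a tabletop RPG set in this world: {world}. "
    "Narrate the consequences of the player's action in two or three sentences, "
    "then offer three numbered choices for what to do next."
)

def gm_turn(world: str, history: list[dict], player_action: str) -> str:
    messages = [{"role": "system", "content": GM_PROMPT.format(world=world)},
                *history,
                {"role": "user", "content": player_action}]
    response = client.chat.completions.create(model="solar-1-mini-chat",
                                               messages=messages)
    return response.choices[0].message.content

world = "a floating archipelago of sky islands ruled by rival guilds"
print(gm_turn(world, [], "I sneak into the cartographers' guild archive."))
```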
Wrapping Up the LLM Project Course & Webinar
Q: What were your thoughts on participating in Upstage's LLM Project Course and your experience using Solar LLM?
The models' response time and generation speed are very fast compared to other APIs; I felt they were a good fit for services where speed is important.
The model's ability to follow user instructions was impressive. Other large models do not perform well in this area, but Solar generally produces responses that adhere to the instructions given, making it convenient to use.
Solar's Korean embedding model appears to perform better than other commercial services.
Although not used in our service, I think features like Layout Analysis, OCR, and Groundedness Check could be useful in future projects.
After using Solar models this time, I was genuinely impressed by the outstanding performance of the embedding and LLM. If you're considering introducing an LLM that supports the Korean language, I highly recommend examining Solar models for their cost-effective performance.
The Solar API combined seamlessly with LangChain, making the development process smooth.
The Layout Analysis API and the Groundedness Check API were extremely helpful, particularly when converting PDFs to HTML.
Solar's high-quality APIs at affordable prices are highly recommended for building various LLM services.
We recommend Solar for those who are developing services that involve uploading information and retrieving answers from that information, like our team.
Utilizing Upstage's Solar API was a revelation in terms of cost-effectiveness. Analyzing multiple PDFs dozens of times cost less than a dollar in total, which is why I would highly recommend it to those looking for an affordable solution for small LLMs. I believe early-stage startups with limited budgets like ours can greatly benefit from this resource.
Solar LLM, Embedding, and other models are super user-friendly. I am also looking forward to the future advancements in OCR services. I would like to recommend the use of Solar LLM, Embedding, and other models to everyone involved in LLM utilization research.
[ → Try Solar API]
Check out the webinar replay for all the insightful details!
We've explored a variety of use cases for Upstage's Solar LLM, highlighting the potential for LLMs to be applied across many industries. The webinar was especially meaningful because it featured graduates of the "Boostcamp AI Tech" and "Upstage AI Lab" courses, both of which are co-created by Upstage.
As we envision the growth and expansion of the vibrant LLM ecosystem fostered by Upstage, we look forward to the next wave of beneficial AI use cases built with Solar LLM.
For more detailed insights into the real-world LLM use cases, watch the video replay using the link below.