Browse applications built on OpenAI Gym technology. Explore PoC and MVP applications created by our community and discover innovative use cases for OpenAI Gym technology.
Adapt-a-RAG is an adaptable retrieval-augmented generation application. It creates synthetic data to optimize its own prompts, and it recompiles itself on every run in a way uniquely adapted to the user's query.
Data Chat: Upload CSV, analyze with text. Simplifying data analysis through user-friendly CSV upload and natural language processing.
Discover new perspectives with Ideastorm, an AI-powered chat platform that generates unique discussions between two customizable AI agents for brainstorming and idea generation.
Buggy is a WhatsApp bot that generates images, checks network speed, and chats with users using OpenAI's GPT-3 language model.
Speech analysis SaaS product. Uses speech-to-text to identify areas for improvement in recorded audio. Gives real-time feedback and suggestions to reduce stuttering and enhance communication skills. Aimed at boosting confidence in communication.
Introducing MyQuiz.AI - a trivia game that uses AI to generate custom questions based on your interests and skills. Simply use your voice to start a fun and challenging quiz journey.
This project aims to break down language barriers and empower the deaf and hard-of-hearing community. Leveraging the power of OpenAI and Whisper, we are developing an innovative solution that can translate speech into sign language in real-time.
It can deliver a near-human-like conversational experience.
Our application encrypts speech input messages using OpenAI Whisper and multi-layer encryption codes generated by GPT-3. Customizable encryption algorithms and keys, a user-friendly interface, and easy code retrieval make it ideal for secure messaging.
EasyQuery is a platform that unleashes the intuitive power of Web3 data querying with LLMs. With its ability to translate human language into SQL queries and extract data insights, it is the ultimate tool for unlocking the full potential of Web3 data.
Shop smarter with FUDL - the AI-powered app that simplifies grocery shopping. Get personalized recommendations, find discounts, and save up to 50% on your bill.
Sherlock's Pheonix is a deep-tech solution that combines generative AI models, computer vision, and the internet to find your loved ones and missing persons.
An NLP/ML app that fights job-description bias and suggests inclusive language (neutral pronouns, skill emphasis) with real-time feedback and diverse dataset training. Promotes diversity and expands the candidate pool.
Hyperbot 🤖 assists with coding queries, generates art, provides real-time updates on current affairs and weather forecasts, composes tweets, LinkedIn posts, emails, and plays music of your choice or displays your favorite YouTube video.
NuruNet is a WhatsApp chatbot that provides educational resources and personalized learning support to students in Africa, using AI-powered responses from OpenAI.
An artificial intelligence podcast written by ChatGPT, GPT-3.5, OpenAI's davinci model, and human assistance. The art is generated by Stable Diffusion, Open Journey, and DALL-E 2. It is read by Natural Readers text-to-speech and lifelike speech synthesis.
Your personalized chatbot companion, trained exclusively on your chosen content, with secure and private access – tailored conversations made just for you!
RL Introductory Hackathon for the environments CartPole, Walker, and Lunar Lander.
All 3 environments, everything working.
Using the stable_baselines3 library, we tried to solve the problems proposed in the challenge. We used a Proximal Policy Optimization (PPO) model with a standard MLP policy, and varied the number of training iterations to achieve better performance.
We have completed 2 challenges. The first one (CartPole) was completed using our own code, in which we implemented Deep Q-Learning. For the second one (Lunar Lander) we used the stable_baselines3 library.
Applied reinforcement learning to train an agent to play these 3 games: CartPole, Lunar Lander, and Bipedal Walker. We used the basic Environment -> State -> Agent -> Action loop to train our agent: we reward the agent for achieving an outcome that we want, while penalizing it for doing otherwise. After many iterations, our agent learns to clear the games.
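The reward/penalty loop described above can be illustrated with a tiny, dependency-free sketch: tabular Q-learning on a 5-cell corridor. This toy environment is purely illustrative (it is not one of the hackathon games), but it shows the same Environment -> State -> Agent -> Action cycle with rewards for the desired outcome and penalties otherwise:

```python
import random

# Toy environment: a 5-cell corridor. The agent starts in cell 0 and is
# rewarded for reaching cell 4, with a small penalty for every other step.
N_STATES = 5
ACTIONS = (-1, +1)            # move left / move right
GOAL = N_STATES - 1

def step(state, action):
    """One environment transition: returns (next_state, reward, done)."""
    nxt = min(max(state + action, 0), GOAL)
    if nxt == GOAL:
        return nxt, 1.0, True     # reward the outcome we want
    return nxt, -0.01, False      # penalize everything else

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):        # "many iterations" of the loop
    s, done = 0, False
    while not done:
        # epsilon-greedy action choice: mostly exploit, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q toward reward + discounted future value
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right (+1) in every cell.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(GOAL)]
print(policy)
```

Deep Q-Learning replaces the `Q` lookup table with a neural network, which is what makes the same idea scale to continuous observations like CartPole's pole angle and cart velocity.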
Used A2C and DQN for Lunar Lander, DQN for CartPole, and TQC for Bipedal Walker.