Tide is an AI chatbot for the psychology space, developed by me and Eva at Dunoso Studio and released a couple of months ago. The project was born out of our curiosity about building and hosting an AI application that could benefit a niche of users in their day-to-day lives. Family Constellations, a therapeutic method popularized in recent years, seemed like a good candidate for our application.
The problem
In a family constellation, you investigate personal questions and patterns within a system with the help of a systemic facilitator.
The work of a good facilitator in a Family Constellation is invaluable and irreplaceable, and sessions can be led in different formats, including online. However, when we spoke to friends and acquaintances about the practice, their biggest barriers to trying it were:
- Availability - finding a facilitator and scheduling a session.
- Price - not affordable for everyone.
- Privacy - people are not always comfortable sharing their issues in a group setting.
The solution
We never intended to build a replacement for a "real" session, but we couldn't help noticing that certain aspects of a Family Constellation session could be loosely reproduced by an AI coach, with the added benefits of being always available, low cost, and private.
To validate our assumption that people would be interested in such an offering, our first step was to create a Custom GPT so people could try it out directly on OpenAI's platform. The pilot was well received by friends and by the Facebook communities where we shared it, and the feedback we collected proved useful later on.
Technical breakdown
Full Stack (Next.js / v0 / TailwindCSS)
We wanted to build and release the application as quickly as possible. For that, I chose to extend the existing Next.js AI Chatbot Template, which provided good defaults for a Next.js project.
Eva drafted the beautiful landing page in v0, Vercel’s genAI platform for generating user interfaces. I was impressed, to say the least, by how quickly she came up with it. Since everything was built on top of TailwindCSS, I could import the page as-is into the main branch of the project.
Hosting & CI/CD (Vercel & GitHub)
Hosting Next.js applications on Vercel is as native as it gets. In general, I would be cautious about hosting projects there that need to scale, as I know their prices can ramp up quite fast, but the free tier is sufficient for our case.
Linking my GitHub repository with Vercel also connected my commits to deployments, giving me a lightweight continuous deployment pipeline for "free".
AI provider (GPT-4o)
In our first Custom GPT pilot, the only available model was GPT-4o. For the application itself, I was eager to try different models, so I added OpenRouter as an LLM proxy that let me switch between them. We tested DeepSeek V3, which has lower token prices, but after noticing some hallucinations in test conversations, we ultimately stayed with GPT-4o, as it delivered more consistent results.
For development, I set up an 8B Llama model locally with Ollama to avoid wasting paid tokens, but its quality is not good enough for a real use case.
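As a sketch of what this provider switch can look like: both OpenRouter and Ollama expose OpenAI-compatible chat completions endpoints, so one small helper can pick the endpoint and model per environment. The function names, environment variables, and model tags below are illustrative, not taken from the Tide codebase.

```typescript
// Minimal sketch of switching between OpenRouter (production) and a local
// Ollama instance (development) behind one OpenAI-compatible interface.
// Names and model tags are illustrative, not from the Tide codebase.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface ProviderConfig {
  baseUrl: string; // OpenAI-compatible /chat/completions endpoint
  model: string;
  apiKey?: string; // a local Ollama instance needs no key
}

function resolveProvider(env: Record<string, string | undefined>): ProviderConfig {
  if (env.NODE_ENV === "development") {
    // Ollama serves an OpenAI-compatible API on port 11434.
    return {
      baseUrl: "http://localhost:11434/v1/chat/completions",
      model: "llama3",
    };
  }
  return {
    baseUrl: "https://openrouter.ai/api/v1/chat/completions",
    model: "openai/gpt-4o", // could also be e.g. "deepseek/deepseek-chat"
    apiKey: env.OPENROUTER_API_KEY,
  };
}

function buildChatRequest(cfg: ProviderConfig, messages: ChatMessage[]) {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (cfg.apiKey) headers.Authorization = `Bearer ${cfg.apiKey}`;
  return {
    url: cfg.baseUrl,
    init: {
      method: "POST",
      headers,
      body: JSON.stringify({ model: cfg.model, messages }),
    },
  };
}

// Usage (network call elided):
// const cfg = resolveProvider(process.env);
// const { url, init } = buildChatRequest(cfg, [{ role: "user", content: "Hi" }]);
// const res = await fetch(url, init);
```

Because both providers speak the same request shape, the rest of the chat code never needs to know which backend it is talking to.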
Database (PostgreSQL / Supabase)
For the database, I opted for PostgreSQL, using Supabase in production for the first time. Vercel facilitates the integration between a Supabase project and a Vercel one; under the hood this is just shared environment variables, a nice addition but not rocket science.
To manage migrations and data fetching, I chose the Drizzle ORM. It was my first time using the library, and I was rather impressed: migrations are easy to generate, and writing queries feels quite close to SQL semantics.
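For a flavor of Drizzle's style, here is a minimal schema-and-query sketch; the table and column names are my own invention, not Tide's actual schema, and the connection setup is elided.

```typescript
// Illustrative Drizzle schema and query, not Tide's actual schema.
// Migrations would be generated from this file with `drizzle-kit generate`.
import { pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { eq, desc } from "drizzle-orm";
import { db } from "./db"; // hypothetical module exporting the Drizzle instance

export const chats = pgTable("chats", {
  id: uuid("id").primaryKey().defaultRandom(),
  userId: text("user_id").notNull(),
  title: text("title").notNull(),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});

// Reads stay close to SQL semantics, roughly:
// SELECT * FROM chats WHERE user_id = $1 ORDER BY created_at DESC;
export async function chatsForUser(userId: string) {
  return db
    .select()
    .from(chats)
    .where(eq(chats.userId, userId))
    .orderBy(desc(chats.createdAt));
}
```

The schema file doubles as the migration source: `drizzle-kit generate` diffs it against previous snapshots and emits the SQL.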
Auth (Better Auth)
Choosing an authentication solution was not straightforward. Supabase offers one that seemed like the best choice at first, but reviews online were rather negative, especially regarding the documentation.
Better Auth, a new open-source offering in the authentication space, was well praised, and I'm glad I chose them. The library has a rich ecosystem with well-documented integrations for Next.js and Postgres (Drizzle), which were valuable for this project.
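As a sketch, a Better Auth setup with those integrations reduces to a single configuration object. The adapter and plugin module paths below follow Better Auth's documentation as I understand it, so double-check them against the current docs; the `./db` import is a hypothetical module holding the Drizzle instance.

```typescript
// Sketch of a Better Auth setup with the Drizzle (Postgres) adapter and the
// anonymous plugin; verify module paths against the current documentation.
import { betterAuth } from "better-auth";
import { drizzleAdapter } from "better-auth/adapters/drizzle";
import { anonymous } from "better-auth/plugins";
import { db } from "./db"; // hypothetical module exporting the Drizzle instance

export const auth = betterAuth({
  database: drizzleAdapter(db, { provider: "pg" }),
  emailAndPassword: { enabled: true },
  plugins: [
    // Lets visitors start chatting before registering; the anonymous
    // account can later be linked to a real one.
    anonymous(),
  ],
});
```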
However, I did encounter one issue with Better Auth, which I reported: linking an anonymous account after registering didn't work as described in the documentation.
Tracking (Umami)
I know the word tracking has a bad connotation, but we didn't want a behemoth like Google Analytics or Mixpanel (despite enjoying the latter); we only intended to track the number of users and their originating locations. After considering several lightweight alternatives such as Fathom and Matomo, I settled on Umami for its free tier, cookie-free approach, and GDPR compliance. Umami can also be self-hosted, which might be an option in the future, but not now.
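Integration-wise, Umami's tracker is a single script tag. A sketch of how it might sit in a Next.js App Router layout (the `data-website-id` value is a placeholder for your own Umami site id, and the src assumes Umami Cloud):

```tsx
// app/layout.tsx — loading Umami's tracker; the website id is a placeholder.
import Script from "next/script";

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        <Script
          src="https://cloud.umami.is/script.js"
          data-website-id="your-website-id"
          strategy="afterInteractive"
        />
      </body>
    </html>
  );
}
```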
Conclusion
Currently, Tide is on the back burner while we focus on other projects. A small group of people use it and return regularly, which makes us happy to have built a product people find useful. We are keeping it free for now, as the costs are low and we are supported by other activities.
Potential next steps we have in mind are:
- Reaching more users by promoting it more
- Creating AI-generated visualizations of the constellations
If you need help building your own AI product, let's get in touch!