Thursday, April 23, 2026

Experience with Digital Rupee

 I finally tried the Digital Rupee. 

Well, I can hear you asking: what do you mean by Digital Rupee? Aren't most of our transactions already digital?

Well... most of our transactions are digital, but they are not exactly, or necessarily, digital money.

Digital Rupee is issued by the Reserve Bank of India. It is sovereign digital cash, a claim against the RBI (with other digital money, it is a claim on a bank or card network). When you pay with Digital Rupee (actually a token; a little more on that later. No, not an AI compute token, a blockchain token), the settlement is instant and peer-to-peer, as good as physical money, and doesn't rely on banks and interbank settlement.

All that is fine, but why do we need it? Why should we care? Isn't there already IMPS, UPI and all that?

Well, with UPI, cards and the rest, a bank has to be involved. Think of the times a merchant doesn't accept UPI because there is a cost involved in accepting it (FWIW, currently the MDR is paid by the government). (That is also the reason there is not much innovation in UPI, but that's a topic for another day.)

In fact, if done right and at scale, it could reduce the load on banking systems. (We may take pride in the scale of UPI transactions, but in reality it is a strain on the banking systems.) In other words, it would help us achieve a digital cash economy rather than merely a cashless economy.

Digital cash is the operative phrase. The money can be programmed. You can give 500 digital rupees to your driver for fuel and even specify at which outlet he can spend it. The government can issue money for specific uses and ensure it is used only for that purpose: say, school fees, food and so on. You get the drift. (Think about how much money was spent on Aadhaar, its potential use cases, and the efforts to prevent leakage in DBT (direct benefit transfers).)
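To make the idea of programmable money concrete, here is a purely hypothetical sketch of how a wallet could enforce a token's programmed conditions at payment time. The field names and checks are my own illustration; the actual CBDC design is not public at this level of detail.

```python
def can_spend(token, merchant):
    """Return True if this token's programmed conditions allow
    spending it at the given merchant (hypothetical illustration)."""
    allowed_category = token.get("category")   # e.g. "fuel"
    allowed_outlets = token.get("outlets")     # e.g. a set of outlet ids
    if allowed_category and merchant["category"] != allowed_category:
        return False
    if allowed_outlets and merchant["id"] not in allowed_outlets:
        return False
    return True
```

An unprogrammed token (no conditions set) would spend anywhere, just like cash; a programmed one is rejected outside its conditions.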

Not just that, you can use it offline (think of the times your UPI app isn't working) with tap-and-pay, and sync later. With large amounts of digital rupee, it can serve as escrow as well: say, money is released to a builder only when certain milestones are met.

With UPI interoperability and proper plumbing, it could help even with cross-border transfers and international payments.

In fact, the money would be truly digital, and we could leverage all the benefits that follow.

However, it is mostly in pilot stages (very few banks participate), and you need to download your bank's separate digital rupee app, do the SIM binding (the wallet is bound to the hardware to prevent cloning of the digital cash), complete KYC and load money. It is fairly simple and straightforward, and it is interoperable with UPI. However, to use the programmable features (saying this money can be used only for these purposes), the other person must also have a digital rupee wallet with the same bank. There are other limitations too, such as one digital rupee wallet holding a maximum of 250 tokens/denominations, with a limit of 1 lakh. (My ICICI app uses images of 500, 200 and 100 rupee notes to load money. For example, you can add 1,000 to your wallet by tapping the 500-rupee image twice or the 200-rupee image five times.) In other words, a 500-rupee note is one token, which also conveys that it really is equivalent to cash.
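The token model can be illustrated with a small sketch. The limits (250 tokens, 1 lakh balance) are from the app's stated rules, but the greedy note-picking logic below is my own illustration, not the bank's actual implementation.

```python
def load_tokens(amount, wallet=None, denominations=(500, 200, 100),
                max_tokens=250, max_balance=100_000):
    """Greedily pick note-tokens that add up to `amount` and append them
    to the wallet, enforcing the 250-token and 1-lakh limits.
    Illustrative only: not the actual wallet app's logic."""
    wallet = list(wallet or [])
    picked, remaining = [], amount
    for note in denominations:           # largest notes first
        while remaining >= note:
            picked.append(note)
            remaining -= note
    if remaining:
        raise ValueError(f"{amount} cannot be formed from {denominations}")
    if len(wallet) + len(picked) > max_tokens:
        raise ValueError("wallet would exceed the 250-token limit")
    if sum(wallet) + amount > max_balance:
        raise ValueError("wallet would exceed the 1 lakh limit")
    return wallet + picked
```

Loading 1,000 this way yields two 500-rupee tokens, matching the "tap the 500 image twice" flow in the app.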

For this to truly reach its potential and scale, the government needs partners with big ambitions. Think of what Google and the likes of PhonePe and Paytm did for UPI. Without those partners, UPI wouldn't have reached the scale and potential it has.


Friday, April 10, 2026

Say Hello to Ask Steve

 TLDR: 

Some time back, rather than writing a book about Steve, I ended up building a custom GPT, What Would Jobs Do, in ChatGPT. It was bugging me that it sat behind ChatGPT without a clean chat UI. No more: I finally have Ask Steve on Lovable, powered by Gemini at the back end.

For the common folks: it is a chat window where you can ask Steve Jobs what you want and hear from him, based on 100-plus hours of video transcripts (mostly his interviews and talks), 100+ articles written since the days of Apple's launch, and 13 definitive books written on him.

For the AI nerd bros: an end-to-end RAG application with a hybrid search architecture, combining vector embeddings (Gemini/pgvector) with a keyword fallback.

Yes, yes, I can finally say: you know, I am something of an AI-native builder myself. Well, sort of ;)

Recap/Evolution

I am a big, fanatic fan of Steve Jobs; I even ended up writing his biography in Tamil. I couldn't help but get mad whenever Steve Jobs and the bad behavior of leaders and founders were referenced in the same breath.

I would scream aloud in my head, "This is not what he intended or what he meant, and this is not what you should learn from him. You are totally wrong. Why use Steve as an excuse for your A behaviour?"

I wanted to write a book on the learnings. Then it struck me: in this day and age, rather than a static book, it would be good to have a dynamic, customized, tailored, unique AI.

I ended up building a custom GPT on ChatGPT; more on that here.

When I wanted to write about how I built it and how to build custom GPTs, I learnt it could be more powerful if I built an end-to-end RAG application.

More than that, the thing that bothered me was not having a better UI as the chat interface.

So I started wondering how to get a good front end and a proper back end, and the result is https://asksteve.lovable.app

How has it been built?

I had a collection of 13 books, 100+ articles and around 100 hours of video. I had all of them in text and used NotebookLM to create the core brain: basically, a synthesis of all of them. Ideally, all of them could be chunked as well, but that might not be efficient, since the AI would keep looking at repetitive material. Rather, it is critical to have a high-density, perfectly synthesized core. In other words, it is essential to pre-distill the data.

I fed in the core files, plus the source articles and video transcripts. (Using the books as-is, or chunking them, might also lead to copyright issues.) The 13 files have been converted into 3,140 chunks and stored in Supabase (PostgreSQL). To make this library searchable by "meaning" rather than just "words," I enabled the pgvector extension. This allows the database to store embeddings: mathematical vectors, generated by the Gemini API, that represent the semantic essence of each text chunk. I then used a recursive chunking strategy to break the books into bite-sized pieces of roughly 1,000 characters, ensuring the AI can pinpoint specific stories or principles without getting lost in the documents.
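A minimal sketch of recursive character chunking, in the style the paragraph above describes: try to split on the largest structural boundary first (paragraph, then line, then sentence, then word), keeping each chunk under roughly 1,000 characters. The separator preferences are my assumption; Lovable's actual splitter may differ.

```python
def recursive_chunk(text, max_len=1000, separators=("\n\n", "\n", ". ", " ")):
    """Split text into chunks of at most max_len characters, preferring
    paragraph, line, sentence, then word boundaries (in that order)."""
    if len(text) <= max_len:
        return [text] if text.strip() else []
    for sep in separators:
        if sep in text:
            chunks, current = [], ""
            for part in text.split(sep):
                candidate = current + sep + part if current else part
                if len(candidate) <= max_len:
                    current = candidate          # keep growing this chunk
                else:
                    if current:
                        chunks.extend(recursive_chunk(current, max_len, separators))
                    current = part               # start a new chunk
            if current:
                chunks.extend(recursive_chunk(current, max_len, separators))
            return chunks
    # no separator found at all: fall back to a hard split
    return [text[i:i + max_len] for i in range(0, len(text), max_len)]
```

Each chunk then gets one embedding row in the pgvector table, so retrieval can land on a specific anecdote rather than a whole chapter.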

The "intelligence" of the system comes from its Hybrid Search architecture. When a user types a question, the backend doesn't just look for exact word matches; it performs a dual-track search. First, it uses Vector Search to find chunks that are conceptually related to the query (e.g., finding "craftsmanship" when you ask about "quality"). Second, it runs a Keyword Search (Full-Text Search) to catch specific names or historical terms like "NeXT" or "Xerox PARC" that might be mathematically blurred in vector space. By merging these results, we achieve a 99% retrieval precision, ensuring the response is grounded in actual facts rather than AI hallucinations.
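One common way to merge a vector ranking with a keyword ranking is reciprocal rank fusion; the app's exact merge logic may differ, but the idea can be sketched like this:

```python
def reciprocal_rank_fusion(vector_hits, keyword_hits, k=60):
    """Merge two best-first ranked lists of chunk ids into one ranking.
    A chunk found by both search tracks accumulates score from each;
    k dampens the dominance of the very top ranks."""
    scores = {}
    for hits in (vector_hits, keyword_hits):
        for rank, chunk_id in enumerate(hits):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

A chunk that surfaces in both tracks (conceptually related and containing the literal term "NeXT", say) rises to the top, which is exactly the grounding behavior described above.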


When you hit "Send," a Supabase Edge Function acts as the traffic controller. It immediately converts your question into a vector via Gemini, queries the database for the top 10–15 most relevant chunks, and bundles them together. This "Knowledge Package", consisting of the Steve Jobs persona instructions, the user's question, and the retrieved primary-source text, is sent to Gemini 1.5 Flash. The LLM then synthesizes a response that is blunt, direct, and "Insanely Great," streaming it back to your screen in real time. This closed loop ensures that Steve isn't just guessing; he is effectively "reading" your curated library to answer you.
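The request flow above can be sketched as a single orchestration function. The real thing is a Supabase Edge Function calling Gemini; here `embed`, `search` and `generate` are injected stand-ins for those calls, so this is a shape sketch, not the production code.

```python
def answer(question, embed, search, generate, top_k=12):
    """One request: embed the question, retrieve the most relevant
    chunks, then prompt the model with persona + sources + question.
    embed/search/generate are injected stand-ins for Gemini/Supabase."""
    query_vec = embed(question)                      # question -> vector
    chunks = search(query_vec, limit=top_k)          # top-N chunk texts
    prompt = (
        "You are Steve Jobs. Be blunt and direct.\n\n"
        "Sources:\n" + "\n---\n".join(chunks) +
        f"\n\nQuestion: {question}"
    )
    return generate(prompt)                          # LLM response
```

Because the retrieved chunks are inlined into the prompt, the model answers from the curated library rather than from its general training data.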

What remains the same: across both the custom GPT and the Gemini-powered RAG, the data, the sources and the system instructions remain more or less the same. However, the way the data is processed and treated is different; here it is a proper end-to-end RAG powered by Gemini. It should be fun to see how the two behave differently.

Key Takeaways

In the world of AI, there is a lot for someone to know. To make things worse, everything evolves and changes at a super-fast pace. Both the quantum and the speed of change are just impossible to keep up with.

What really gives you the edge is knowing what needs to be built, why it should be built, and how it should be built. (By "how" I mean the behavior and the output, rather than the tech stack.)

If someone knows these well, has critical thinking, and is good at prompting, they can build anything in the best possible way. I had to ask the LLMs: what other approaches are there? Why this? Why not that? What are the pros and cons? It walks you through the trade-offs, and you can make a choice accordingly. [For example: keyword vs vector search, why not to use the books even if you have the copyright, and at what point more data or training material stops moving the needle.]

In fact, Lovable did the chunking by itself; I only had to say, use this method. It is not that I need to know the method in depth; I need to know enough to ask the LLMs to tell me about possible methods and how to make them happen. In other words, if you are a little tech-savvy, can speak English and follow instructions, building most things is child's play. [There are a few applications I am having a tough time building.]

There is more than one way to skin the cat. I could have used Claude Code to build it, or Google AI Studio to do it end to end. I could have used the ChatGPT API rather than the Gemini API. To put it simply: you could use AWS, GCP or Microsoft's cloud, but each comes with its own flavor. It is imperative to know which one works for you: functionality, cost and so on.

More than that, the most important thing is your source material, the data: how you organize it and how you train your LLM to deal with it. Again, IMO, each LLM behaves differently; there is a variation in their flavors.

Building this also makes you appreciate, or at least wonder, how difficult and challenging it would be to build something like Gemini, Claude or ChatGPT, or even a regular enterprise AI application with safeguards. (For example, in the beginning, the feedback I got from my friend was that it was mean and verbose.)





Wednesday, April 08, 2026

Who is Satoshi

Indian movie enthusiasts had to wait 22 months to learn the answer to "Why did Kattappa kill Baahubali?"

It took 16 months to unmask the Fake Steve Jobs blogger!

Crypto enthusiasts have been waiting for more than 17 years to learn: "Who is Satoshi Nakamoto?", the creator of Bitcoin and author of the Bitcoin white paper.

John Carreyrou of The New York Times has written an article claiming to unmask Satoshi, saying that Adam Back, co-founder/CEO of blockstream.com, is Satoshi.

However, Adam has gone on record denying it.

If I were Satoshi (or for anyone who is Satoshi), it would take immense willpower and effort not to boast about such power and influence for so long.

Think about it: if you were Batman or Superman, even though you want to keep your identity secret, wouldn't you at least want to tell a select few? And how do those folks keep quiet, once they know you are Superman or Batman?

Nevertheless, read this wonderful article: https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html. I also think and believe certain secrets should stay secrets, and this is one of them.

Finfluencer vs AI tech Bro

Last week, after building the REIT/InvIT planner, I shared it with a few of my friends. Well, actually, all of my friends.

One of my friends responded saying, "Nice, you have become a finfluencer."

For a minute, I wasn't sure whether I should be offended or feel proud.

The whole point of it was to showcase how AI makes life easier and how easy it is to build something super fast. Rather than being called an AI tech bro, getting labeled a finfluencer was a little upsetting.

Then it hit me! It was indeed a compliment

One of my mentors used to say something like this: true tech should be like magic. One should never know the intricacies; one should just see and appreciate the output. (Probably he was channeling his inner Steve Jobs ;))

Though the application was built using AI and leverages a lot of AI, the tech was invisible. In fact, only one person asked me, "Oh, you built it? What stack are you using, and how and where are you getting the data?"

With the rest, the conversation was about what these assets are, how to invest, whether to invest or not, how much to invest, and so on.

Looking back, in a way mission accomplished!

Embracing and Adopting AI

Some time back, I was talking to a senior executive from a traditional company. The talk veered towards AI and the impact and disruption due to it. During the conversation, he wondered how to convince stakeholders in firms that are not necessarily tech-first but are focused on accuracy, precision and reliability. I suggested how they could think of an overall AI strategy: identifying low-impact, low-risk use cases; clearly calling out boundaries on where AI can and cannot be used; rethinking their enterprise risk management with AI in focus; having sufficient guardrails; human-in-the-loop system design; and so on.

In hindsight, I realized I should have told him the below as well:

1. AI is not just GenAI or LLMs (where repeatability and reliability can be challenges). There is a lot more.

2. If the senior/exec leadership team is still worried about using AI, is not wondering how it could leverage AI, and is not proactively taking initiatives to become AI-literate, it is in for far bigger challenges.

3. The real nuanced challenges are more about data protection, privacy, security, and managing legal, risk and compliance.

Having said that, I was also reminded of what a tech person who is building and integrating AI features told me recently.

There are two big challenges in AI:

Senior people with no context or knowledge of a particular domain prompting AI for tough questions and critiques, then parroting them to look smart while putting people down. Think of a finance leader commenting on the choice of a tech stack!

Senior people playing around with AI, building a pet project or POC over the weekend, thinking it would apply equally to production-grade, large-scale systems, and then making insane requests and timelines.