[March 24] Interesting Things I Learnt This Week

1. Ultrasound becoming Ultra small - How MEMS technology has miniaturized ultrasound imaging. Traditional ultrasound machines are bulky and require multiple probes, whereas MEMS technology allows a single probe, small enough to fit in a lab coat pocket, to image the entire body. The article also details the technical aspects of MEMS transducers.

My Take: Any and all technology is bound to get cheaper and smaller over time with progress in science and engineering. Ultrasound, in very layman terms, can be thought of as a speaker and a mic, and both are bound to get smaller and more effective over time. But like any other tool, this can be misused: prenatal sex determination via ultrasound is one of the reasons for the skewed male/female ratio in India. My worry is that if this tech becomes more widely available and portable, it will further skew the gender ratio in India (and probably in a lot of other countries). This particular tech has to be introduced with great care, and all its different social impacts studied carefully, so that proper safeguards are in place before it becomes widely available.
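The speaker-and-mic analogy maps directly to pulse-echo imaging: the probe emits a sound pulse and measures how long the echo takes to return. Here is a minimal sketch of the underlying arithmetic, assuming the conventional average speed of sound in soft tissue (~1540 m/s); it is an illustration of the principle, not anything from the article:

```python
# Pulse-echo ranging: the basic math behind ultrasound imaging.
# Assumes the conventional average speed of sound in soft tissue.

SPEED_OF_SOUND_TISSUE = 1540.0  # metres per second (typical soft-tissue value)

def echo_depth_m(round_trip_time_s: float) -> float:
    """Depth of a reflector, given the round-trip time of its echo.

    The pulse travels to the reflector and back, so we halve the
    total path length.
    """
    return SPEED_OF_SOUND_TISSUE * round_trip_time_s / 2.0

# An echo returning after 130 microseconds implies a reflector ~10 cm deep.
print(f"{echo_depth_m(130e-6) * 100:.1f} cm")  # -> 10.0 cm
```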

 

2. Reverse Engineering Perplexity - Details about how Perplexity probably works. Not definitive, but very good speculation: Perplexity seems to basically summarize the content from the top 5-10 results of a Google search. If you search for the exact same thing on Google and Perplexity and compare the sources, they match 1:1.

My Take: This is a very interesting analysis. Others have speculated that they use Bing instead of Google. We could probably build a browser extension that does this for any search engine; you don't really need billions of dollars, because the expensive part, the indexing, is already done by the search engine and you only have to summarize those few links. I may attempt it sometime, will let you know.
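To make the speculation concrete, here is a minimal sketch of such a pipeline: fetch the top search results, pull their text, and ask an LLM to summarize with citations. Everything here is an assumption based on the article's guess, not Perplexity's actual stack; the search function is a placeholder and the model and prompt are illustrative.

```python
# Hypothetical sketch of a "Perplexity-like" answer engine, per the
# article's speculation: summarize the top N search results with an LLM.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def top_results(query: str, n: int = 5) -> list[str]:
    """Placeholder: return the top-n result URLs for a query.

    Swap in a real search API (Google Custom Search, Bing Web Search,
    etc.); this is the part where the search engine's billions of
    dollars of indexing do the heavy lifting.
    """
    raise NotImplementedError("plug in a search API here")

def fetch_text(url: str, max_chars: int = 4000) -> str:
    """Fetch a page and crudely truncate it. A real system would strip
    boilerplate and extract just the main article text."""
    return requests.get(url, timeout=10).text[:max_chars]

def answer(query: str) -> str:
    urls = top_results(query)
    sources = "\n\n".join(
        f"[{i + 1}] {url}\n{fetch_text(url)}" for i, url in enumerate(urls)
    )
    prompt = (
        f"Answer the question using only the sources below, citing them "
        f"as [1], [2], ...\n\nQuestion: {query}\n\nSources:\n{sources}"
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```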


3. OpenAI GPT-4 vs Groq Mixtral-8x7b - An article comparing OpenAI GPT-4 and Groq Mixtral-8x7b on parsing speed and accuracy. GPT-4 is more accurate but slower; Mixtral-8x7b on Groq is faster but makes some mistakes, which the author thinks could be fixed with better prompts.
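As a rough sketch of how such a comparison could be run: Groq exposes an OpenAI-compatible API, so the same client can time both backends side by side. The Groq base URL and model names below are assumptions from memory, not taken from the article; verify them against the providers' current docs.

```python
# Rough side-by-side latency check, assuming Groq's OpenAI-compatible
# endpoint and these model names (verify against current provider docs).
import time
from openai import OpenAI

PROMPT = "Extract the date, total, and vendor from: 'Invoice #42 ...'"

backends = {
    "gpt-4": OpenAI(),  # uses OPENAI_API_KEY from the environment
    "mixtral-8x7b-32768": OpenAI(
        base_url="https://api.groq.com/openai/v1",  # assumed Groq endpoint
        api_key="YOUR_GROQ_API_KEY",
    ),
}

for model, client in backends.items():
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": PROMPT}]
    )
    elapsed = time.perf_counter() - start
    print(f"{model}: {elapsed:.2f}s -> {resp.choices[0].message.content[:80]!r}")
```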

My Take: I think this article is more worrying for NVIDIA than it is for OpenAI. Their hardware dominance got a very good challenge from Groq's LPU. Models will eventually reach the stasis-like state that databases have reached, and with so many open-source models out there, it will be a challenge to keep closed-source models relevant, especially if you have to deploy them on laptops, mobile phones, etc. Groq is very promising, and I hope they can break into the hardware world. Then there are Google's TPUs, which have remained deployed only in GCP; I hope Groq's LPUs do not end up like that.

 
