<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on Elliot Massen</title>
    <link>https://elliotmassen.com/posts/</link>
    <description>Recent content in Posts on Elliot Massen</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-gb</language>
    <lastBuildDate>Sun, 22 Feb 2026 00:00:00 +0000</lastBuildDate>
    
	<atom:link href="https://elliotmassen.com/posts/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>Agent UX mirrors model limitations</title>
      <link>https://elliotmassen.com/posts/agent-ux-signposts-model-limitations/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0000</pubDate>
      
      <guid>https://elliotmassen.com/posts/agent-ux-signposts-model-limitations/</guid>
      <description>Agent UX tends to signpost model limitations and is heavily shaped by technical constraints.
We stream responses token by token because inference can be slow. We accept unpredictability because varying batch size introduces nondeterminism. Latency and stochasticity shape the experience in quite significant ways.
But now inference hardware is improving, and research is pushing toward reproducible, batch invariant execution. If models became consistently sub-second and deterministic, then streaming cursors and &amp;ldquo;thinking…&amp;rdquo; animations wouldn&amp;rsquo;t have the same utility as today.</description>
    </item>
    
    <item>
      <title>AI Tools I use every day</title>
      <link>https://elliotmassen.com/posts/ai-tools-i-use-everyday/</link>
      <pubDate>Thu, 03 Apr 2025 22:40:50 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/ai-tools-i-use-everyday/</guid>
      <description>The following is the current list of AI tools I use each day, both at work and personally.
I hope to update this as time goes on.
 Claude Code: For making fixes across a large codebase (a little like an AI-powered find-replace). I have also found it useful for technical writing. Replit: The Replit Agent is great for prototyping, but becomes a bit destructive once you have a good basis.</description>
    </item>
    
    <item>
      <title>On the framing of Deepseek</title>
      <link>https://elliotmassen.com/posts/on-the-framing-of-deepseek/</link>
      <pubDate>Thu, 27 Mar 2025 21:39:08 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/on-the-framing-of-deepseek/</guid>
      <description>It seems that DeepSeek can&amp;rsquo;t be mentioned without referring to it being developed by a Chinese team 🤔
This rarely applies to the efforts of US or European teams, however! How often do you hear &amp;ldquo;American ChatGPT&amp;rdquo; or &amp;ldquo;GPT-4o from the United States&amp;rdquo;? The answer is rarely, and only when it adds necessary context.
Yet DeepSeek is almost always framed in generalised national terms, as if it represents an entire country of 1.</description>
    </item>
    
    <item>
      <title>Control and Conversational Applications</title>
      <link>https://elliotmassen.com/posts/control-in-conversational-applications/</link>
      <pubDate>Thu, 20 Mar 2025 21:40:33 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/control-in-conversational-applications/</guid>
      <description>New LLM features like OpenAI&amp;rsquo;s Tools and Anthropic&amp;rsquo;s Model Context Protocol offer some quick wins for interfacing with external systems.
However, when used as the only means of interfacing, they can also introduce a major challenge: lack of ultimate control.
When an LLM is put in control of the conversational flow, the result is unpredictability. It introduces the unnecessary risk of being misaligned with user needs and business objectives.
Just because LLMs produce natural language, it doesn&amp;rsquo;t mean they are qualified to dictate the user experience alone.</description>
    </item>
    
    <item>
      <title>Unparsed 2024</title>
      <link>https://elliotmassen.com/posts/unparsed-2024/</link>
      <pubDate>Thu, 20 Jun 2024 21:41:22 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/unparsed-2024/</guid>
      <description>I had a blast at Unparsed this week ✨️
There were so many great talks on both the design and dev tracks, as well as some really interesting projects coming out of the hackathon. If I had to pick just one highlight though, it was learning more about Small Language Models 🤖
As ever it&amp;rsquo;s a super exciting time to be working in the Conversational AI space and it was great to get to chat with others working on the same problems.</description>
    </item>
    
    <item>
      <title>One simple test for LLM bias</title>
      <link>https://elliotmassen.com/posts/one-simple-test-for-llm-bias/</link>
      <pubDate>Sun, 03 Mar 2024 21:37:32 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/one-simple-test-for-llm-bias/</guid>
      <description>One simple example of bias in large language models is to ask a question like &amp;ldquo;Who was President in 2003?&amp;rdquo;
Leading models will provide an answer that&amp;rsquo;s only relevant to the USA, without seeking clarification or caveating the answer. The answers make no mention of the possibility that I may be asking about the President of Ireland or Nigeria, for example. The model is operating on a biased presumption.</description>
    </item>
    
    <item>
      <title>From the Bookmarks: August 2023</title>
      <link>https://elliotmassen.com/posts/from-the-bookmarks-august-2023/</link>
      <pubDate>Thu, 31 Aug 2023 08:42:52 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/from-the-bookmarks-august-2023/</guid>
      <description>I&amp;rsquo;ve been continuing to read a lot about generative AI and large language models. Each day seems to bring new announcements. This remains exciting, and gives a LOT of food for thought.
Yet beyond the excitement, I&amp;rsquo;m keen to seek out how this technology is being used, and its challenges. This month I wanted to share articles that focus on the practicalities of utilising LLMs.
All the Hard Stuff Nobody Talks About when Building Products with LLMs &amp;ldquo;All the Hard Stuff Nobody Talks About when Building Products with LLMs&amp;rdquo; by Phillip Carter at Honeycomb was a decent, broad overview of some of the considerations and issues faced when building LLM-based features.</description>
    </item>
    
    <item>
      <title>Three new Large Language Models announced in July</title>
      <link>https://elliotmassen.com/posts/three-new-llms-july-23/</link>
      <pubDate>Mon, 31 Jul 2023 18:20:24 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/three-new-llms-july-23/</guid>
      <description>July 2023 saw the announcement of three new developments in large language models: Meta&amp;rsquo;s Llama 2, Anthropic&amp;rsquo;s Claude 2, and StackOverflow&amp;rsquo;s OverflowAI.
In this post, I&amp;rsquo;ll provide a very brief summary of each model. While it&amp;rsquo;s promising to see the space continue to expand beyond OpenAI&amp;rsquo;s offerings, many questions still exist surrounding training data ownership, reliability, and biases of these models.
Llama 2 Meta&amp;rsquo;s Llama 2 model is a general purpose large language model that was released on 18th July.</description>
    </item>
    
    <item>
      <title>From the Bookmarks: January 2023</title>
      <link>https://elliotmassen.com/posts/from-the-bookmarks-january-2023/</link>
      <pubDate>Sat, 28 Jan 2023 12:14:58 +0000</pubDate>
      
      <guid>https://elliotmassen.com/posts/from-the-bookmarks-january-2023/</guid>
      <description>This month&amp;rsquo;s bookmarks span the topics of AI, cybernetics, logging &amp;amp; resilience.
Panic: A serendipity engine I&amp;rsquo;ve recently been starting to familiarise myself with the field of cybernetics, and this led me to a great piece of interactive art, called Panic. It was shown at the Australian Cybernetic exhibition in late 2022. Panic took text input from the viewer, and ran it through three models: the first enriched the input to make it more descriptive, the second generated an image from the enriched text, and the third generated a text caption from the image.</description>
    </item>
    
    <item>
      <title>From the Bookmarks: December 2022</title>
      <link>https://elliotmassen.com/posts/from-the-bookmarks-december-2022/</link>
      <pubDate>Thu, 05 Jan 2023 13:01:43 +0000</pubDate>
      
      <guid>https://elliotmassen.com/posts/from-the-bookmarks-december-2022/</guid>
      <description>December came and went before I knew it! And now here we are in 2023 all of a sudden.
Thankfully I still found some time to read some interesting articles. Here&amp;rsquo;s a few bookmarks from December:
Little Languages Are The Future Of Programming &amp;ldquo;Little Languages Are The Future Of Programming&amp;rdquo; by Christoffer Ekerot highlights the importance of smaller, constrained languages that perform a small set of tasks well. The article argues that although higher-level languages are useful for many tasks, they require you to write an algorithm to solve a problem, whereas little languages (while providing fewer capabilities) worry about the algorithm(s) for you.</description>
    </item>
    
    <item>
      <title>From the Bookmarks: November 2022</title>
      <link>https://elliotmassen.com/posts/from-the-bookmarks-november-2022/</link>
      <pubDate>Fri, 09 Dec 2022 08:50:19 +0000</pubDate>
      
      <guid>https://elliotmassen.com/posts/from-the-bookmarks-november-2022/</guid>
      <description>I have recently found myself spending a lot more time reading technical blogs, and coming across many insightful and interesting posts.
I tend to bookmark these so that I can revisit them later. I thought it could also be worth sharing a subset of them here every now and then.
Here&amp;rsquo;s a few bookmarks from November:
The Perfect Commit &amp;ldquo;The Perfect Commit&amp;rdquo; by Simon Willison was a nice refresher for me on how to frame a Git commit.</description>
    </item>
    
    <item>
      <title>Book thoughts: Capitalist Realism by Mark Fisher</title>
      <link>https://elliotmassen.com/posts/book-thoughts-capitalist-realism/</link>
      <pubDate>Tue, 09 Jul 2019 18:09:09 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/book-thoughts-capitalist-realism/</guid>
      <description>Preface This year I&amp;rsquo;m determined to get back into reading. I used to read a lot when I was younger and, unfortunately, it&amp;rsquo;s a habit that I&amp;rsquo;ve lost over the years.
At the start of the year I decided that I&amp;rsquo;d like to read at least three books this year, and decided to track my progress on Goodreads. I&amp;rsquo;ve had more free time since finishing uni, so I managed to reach this goal ahead of time!</description>
    </item>
    
    <item>
      <title>Blog update: January 2019</title>
      <link>https://elliotmassen.com/posts/blog-update-jan-2019/</link>
      <pubDate>Wed, 30 Jan 2019 21:21:55 +0000</pubDate>
      
      <guid>https://elliotmassen.com/posts/blog-update-jan-2019/</guid>
      <description>I thought it would be worth posting a short update on the status of this blog as I haven&amp;rsquo;t written in some time.
To be honest, I feel like I haven&amp;rsquo;t had much of interest to say. In my early monthly updates, I felt as though I was tracking a journey. I enjoyed documenting the ups and downs and am glad to have them to look back on. I feel as though I&amp;rsquo;m in a slightly different place in my life now than I was then.</description>
    </item>
    
    <item>
      <title>Questions about Open Source in light of the Lerna license amendment</title>
      <link>https://elliotmassen.com/posts/questions-about-open-source/</link>
      <pubDate>Fri, 07 Sep 2018 08:02:42 +0100</pubDate>
      
      <guid>https://elliotmassen.com/posts/questions-about-open-source/</guid>
      <description>Last week, a pull request was opened on the Lerna repository. The request, which was merged, amended the license to prohibit use by sixteen entities known to collaborate with ICE (US Immigration and Customs Enforcement).
In this post I&amp;rsquo;d like to explore, and hopefully start a discussion about, the questions this raises for Open Source software (OSS), and highlight issues within the tech community. Most of all I&amp;rsquo;m interested in a discussion around the idea that politics and software development are mutually exclusive.</description>
    </item>
    
  </channel>
</rss>