Prompt sensitivity revisited: quantization and open source models

In this follow-up Python post, I reproduce the ‘prompt sensitivity’ issue I identified last year in an OpenAI model, this time in an open-source model running locally. I also discover that quantization, the process that shrinks models and can make them easier to run locally, is apparently responsible for this quirky behaviour. Because the closed-source model I used in my earlier post, text-davinci-003, is no longer available, this post opens a path for further, reproducible exploration of the issue. ...

October 25, 2024 13 min

Sentiment analysis with the OpenAI API - Part 2

In this follow-up Python post, I document experiments with OpenAI’s text-davinci-003 and GPT-3.5-turbo endpoints, including an unexpected model response to a small prompt change. ...

September 10, 2023 14 min

Open HESA financial data 3: comparing top 5 Scottish universities

Finally, in this R post I use the prepared HESA financial data to reproduce and analyse the 2020-21 “surplus/deficit for the year” figures for the top five Scottish universities. ...

November 20, 2022 10 min