AI Chat technology

Created by: Lester Caine, Last modification: 22 Feb 2025 (10:11 UTC)

My first pass on this was a blog post, "Advanced Idiots not only Artificial", which was a bit of a rant. This is intended to be a little more factual.

I've so far experimented with three chatbots. My initial attempts were with Llama 3.1 from Meta, and I ran a few threads while getting a feel for what it could and could not do. It told me that its training set dated from 2021, so some more recent events were missing, and while it could adapt what it returned based on my prompts, it lacked some key knowledge. The interface moved on to version 3.3 while I was using it, but the answers did not change.

I then switched to Mistral 7B, which told me that its training set dated from October 2023, so it was a little more up to date. It is tagged as open-source, so I am not yet sure who is hosting it. When asked to produce a history of CDDB, which had been one of my threads with Llama, it produced a nice summary, but one very much biased towards the commercial situation. When prompted about key open-source elements it promptly rewrote the summary, adding in those sources but essentially keeping to the original. Further prompting added a little more of the missing detail, but it reached a point where the summary was clipped in length. While it would repeat the missing final paragraphs, it insisted on trying to repeat the whole thing when discussing how I should identify the source, so the actual citation was lost beyond the display window.

The third trial was with GPT-4o mini from OpenAI, which is back to an October 2021 training set. I tried the same question about CDDB, "So you should be able to provide a history of the sources of cddb data used to identify CDs", and while it acknowledged the community origins, there was no mention of any of the open-source projects. When prompted about that omission it produced a new response addressing only that point, without reworking its original answer as Mistral had done! So I don't think I will be using this source going forward.

As a personal assistant, I don't think any of these count. I can understand part of the 'privacy' issue that storing a history of interactions creates, and the problems that earlier 'models' had in updating their responses based on other discussions, but if I am using an 'Assistant', it should be able to remember facts like my medical problems and tailor its responses appropriately. I think a MAJOR failing of the NHS 111 service is that it starts from zero on every consultation. I should be able to access it via the security of the app and have it remember what we had discussed previously. THAT would be the intelligent thing for it to do, and its training set should be UK-centric, leaving out any specifically American bias, with all of this managed via data centres sited in the United Kingdom.