Hypothesis: ChatGPT trains on humans' neural networks, not just data

TL;DR: The human-like responses of ChatGPT or BingBot may be an indication that massive language models are training not just on data but also on human opinions, to produce even more human-like results. It’s interesting to see how people react upon seeing ChatGPT’s generated responses. Most reactions I’ve seen range from sceptical acceptance to complete awe. The common theme, though, is that the responses are amazing. That becomes even more apparent in people who know that GPT models are “just” predictive text apps.

Quality Architecture

The other day I was chatting to a good friend of mine about a webinar she had watched. It was about testing strategies, such as testing pyramids and inverted testing pyramids. She wasn’t convinced that simply shifting the bulk of testing from unit tests to end-to-end tests can dramatically improve software quality. Her point was that after adopting such a strategy the tests become more complex and brittle, which is expensive to maintain.
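
To make her point concrete, here is a minimal sketch in Python (using pytest conventions and Playwright for the browser part). The `checkout_total` function, the URL, and the CSS selectors are all hypothetical, invented just for illustration. The unit test exercises one pure function; the end-to-end test depends on a running server, a browser, and exact page structure, any of which can change and break the test without the business logic being wrong.

```python
def checkout_total(prices: list[float], tax_rate: float) -> float:
    """Pure function: trivial to test in isolation (hypothetical example)."""
    return round(sum(prices) * (1 + tax_rate), 2)


def test_checkout_total_unit():
    # Unit test: fast, deterministic, no external dependencies.
    assert checkout_total([10.0, 5.0], tax_rate=0.2) == 18.0


def test_checkout_total_end_to_end():
    # End-to-end test: needs a deployed app, a real browser, and
    # stable selectors -- each one a separate reason the test can
    # fail even when checkout_total itself is correct.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:8000/checkout")  # app must be running
        page.fill("#price", "15.00")                 # selector may change
        page.click("text=Add item")                  # copy change breaks this
        assert page.inner_text("#total") == "18.00"  # 15.00 + 20% tax
        browser.close()
```

The second test is roughly the same length here, but in practice it drags in deployment, test data, timing, and UI churn, which is exactly the maintenance cost she was worried about.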