Why LLM is slop and what can be done about and with it

Continuing the discussion from [TUTORIAL] How to install Claude Code on Haiku – because apparently waiting twenty years wasn't enough:

Buzzword Confusion

While “Artificial Intelligence” and “Machine Learning” are deceptive buzzwords, the tools they describe do have a few, but only a few, uses in software development: setting up a rough draft quickly, and discovering precedents (or the lack thereof). At bottom they are pattern-matching algorithms, an extreme form of “autocomplete”.
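To make the “extreme autocomplete” claim concrete, here is a minimal sketch (in Python, with hypothetical names) of the crudest possible version of the idea: a bigram table that predicts the next word from patterns seen in a training text. An LLM is vastly larger and statistical rather than tabular, but this is the same family of trick.

```python
# Toy sketch: "autocomplete" as pattern matching over previously seen text.
# A bigram frequency table is the simplest form of next-token prediction.
from collections import Counter, defaultdict

def train(corpus: str):
    """Count, for every word, which words followed it in the corpus."""
    table = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def complete(table, word: str):
    """Return the most frequent continuation seen after `word`, or None."""
    return table[word].most_common(1)[0][0] if table[word] else None

table = train("to be or not to be that is the question")
print(complete(table, "to"))  # -> 'be'
```

The point of the toy is the limitation it shares with the real thing: it can only ever echo patterns present in its training data, which is exactly why “finding precedent” is the honest use case.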

Applying an LLM as a Gate Layout Tool

I found that some of my techniques for dealing with non-von Neumann architectures held up during a SuperGrok session in which I was developing, in VHDL, a CPU microarchitecture built around a graph instruction set.

I don’t know how to do gate layout in VHDL, but pattern matching is a useful way to catch common mistakes. Also, other solutions tried back in the late ’80s, when MIT was developing its Monsoon dataflow architecture, exposed a critical flaw in the design process that had nothing to do with the chip: the compiler technology of the era sucked. If I could splice the parsers of that era onto an MLIR dialect with LLVM as a backend, then remove the unnecessary stages after DAG (directed acyclic graph) creation, the graph instruction set would fall out as an artifact of ordinary von Neumann code generation, and I suspect there would have been nothing wrong with the Monsoon chip.
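The claim that a graph instruction set is “an artifact of DAG creation” can be sketched in a few lines. This is a toy in Python (all names hypothetical, not MLIR/LLVM API), assuming the usual value-numbering step a compiler performs during DAG construction: if you stop the pipeline there and emit the DAG’s nodes and edges directly, instead of linearizing into register-based code, you already have a dataflow-style graph program.

```python
# Toy sketch: a compiler's expression DAG, emitted directly as a
# "graph instruction set" (one instruction per node, operands as edges).

class Node:
    """A DAG node: an op plus its operand nodes/leaves."""
    _pool = {}  # value-numbering table: (op, operands) -> shared node

    def __new__(cls, op, *args):
        key = (op, args)
        if key not in cls._pool:  # share common subexpressions
            node = super().__new__(cls)
            node.op, node.args = op, args
            cls._pool[key] = node
        return cls._pool[key]

def emit_graph(root):
    """One pass over the DAG: emit (id, op, operand-ids) triples.
    Operands are node ids (dataflow edges), not register names."""
    seen, program = {}, []
    def walk(n):
        if n in seen:
            return seen[n]
        ops = tuple(walk(a) if isinstance(a, Node) else a for a in n.args)
        nid = len(program)
        seen[n] = nid
        program.append((nid, n.op, ops))
        return nid
    walk(root)
    return program

# (a + b) * (a + b): the shared subexpression is one node, referenced twice.
shared = Node("add", "a", "b")
for instr in emit_graph(Node("mul", shared, shared)):
    print(instr)
# -> (0, 'add', ('a', 'b'))
#    (1, 'mul', (0, 0))
```

A register-based backend would have to spill that sharing into named temporaries; stopping at the DAG keeps the sharing as graph structure, which is the shape a Monsoon-style dataflow machine wants in the first place.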

Summary

Looking for precedent and common practices as a means of accelerating humanity’s software development is a helpful use of the pattern-matching algorithms behind LLMs. Expecting them to do your homework for you is still fantasy.

