That's one thing I'll say about The First Heretic, that the author never passes up a chance to belabour the resemblance of the Urizen to his father, and goes on to be pretty didactic in relating that Lorgar had dreams of a golden figure he first interpreted as his father, and later as himself.
Given that he'd spent 100 years believing his father to be the savior of Humanity, it seems reasonable that he'd give up on his father before he'd give up on the notion of Humanity's savior. And his father's rebuke for being religious was not only unnecessarily harsh, but poorly timed as well, at least from the standpoint of changing Lorgar's opinion about the necessity of religion.
Horus was driven over the edge at the thought of not being remembered after having been so utterly fantastically spiffy, Magnus had to be hounded off the edge by the Space Wolves, but Lorgar was betrayed by the Emperor.
That's the first heretic.
Note: I personally find the falls of the Primarchs all pretty convincing. It's an interesting psychological facet of people that, if confronted by some fact contrary to their own opinion, such as the Emperor not being God, they will not yank their entire mental structure of beliefs up by the roots and plant a new one. If they are interested in being convinced, they may accept the fact as a new opinion, but they change only that one opinion to reflect the facts. Having performed the cognitive equivalent of a tourniquet on the loss of beliefs before any more might need changing, the person will proceed content to labour in the endorphins of being right.
Kind of nice of the author of The First Heretic to riff on that with the Mechanicum Cybernetica attaché.
I also enjoyed the bit where I think it's Xephan who's explaining to the Confessor the difference between artificial intelligences and machine-spirits. Of course, the relevant detail to the Chaplain is that artificial intelligence eventually turns on its creator, while to the Tech-Priest the relevant detail is that a machine-spirit is an artificially cultured human mind.
This is kind of funny because in the Philosophy of Mind there's an argument about artificial intelligence, namely that if we make them close enough to us that we can recognize them as having minds, they'll be indistinguishable from us and still leave us with the Zombie Problem. Conversely, if we don't make them close enough to us that we can recognize them as having minds, then we'll never know if they have minds and that leaves us with the Chinese Room Problem. In other words, they have to be the same as us and different at the same time...
This problem seems to turn on how you cash out 'having' and 'minds'. If you approach it ontologically, such that there are things called 'minds' that we 'have', it's reasonable to go looking for stuff. Descartes and Plato say that there are two substances, matter and mind, whereas Berkeley and Aristotle say that there is one substance, which is actuated by the soul.
But the problem actually turns on 'indistinguishable' and 'recognize', since it is epistemological. Turing, the mathematician, goes with indistinguishability as his criterion, which turns out to be a flop as far as a model for empirical research goes, because simulating a human mind for the purposes of the unrestricted Turing test has yet to be accomplished. Searle, the idiot, goes with distinguishability, citing instead a biological 'unified field of consciousness' when he misunderstands how computers work.
Searle creates the Chinese Room Problem because he misunderstands how computers (and apparently logic...) work. Searle believes that computer programs are just descriptions of how computer machines might work, and therefore just the way things look rather than the way things are.
So he comes up with the notion that, in principle if not in practice, he could hop in a box and use a small rulebook to instantiate a computer program that passes the Turing Test in written Chinese. He reasons that if no one could tell the difference between the products of such a program and those of the average Chinese writer, then no one could tell the difference between a person and a mindless machine. In logic this is known as "over-generalization": Searle generalizes from the presence of an expert system to the presence of an entire mind, and minds are notorious for the number and complexity of the tasks they can accomplish.
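Searle's rulebook is, at bottom, nothing more than a lookup table from inputs to canned outputs. A minimal sketch (the entries here are my own invented examples, not anything from Searle) shows why a system of this shape can look fluent on the exchanges it covers while obviously containing no mind, which is exactly why generalizing from "expert system" to "entire mind" doesn't follow:

```python
# A toy Chinese Room: a pure lookup table that maps input strings to
# canned replies. The entries are illustrative inventions.
RULEBOOK = {
    "你好": "你好！",        # "hello" -> "hello!"
    "你会说中文吗": "会。",  # "do you speak Chinese?" -> "yes."
}

def room(message: str) -> str:
    """Follow the rulebook mechanically; no understanding involved."""
    # Anything not covered by a rule gets a stock deflection:
    # "please say that again."
    return RULEBOOK.get(message, "请再说一遍。")

print(room("你好"))  # a fluent-looking reply from a mindless table
```

A table like this handles exactly the inputs its author anticipated and nothing else; a mind, by contrast, copes with open-ended tasks it was never given a rule for, which is the gap the over-generalization papers over.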
Which is the problem with the Turing Test in the first place, that we would predicate knowing if something has a mind on its ability to carry a conversation. [Also the coolest bit of the Doctor Who episode Blink]. Searle, as usual, manages to take the wrong idea and make it worse, suggesting that because we can't predicate knowing if something has a mind on its ability to carry a conversation, then we can't have artificial intelligence at all.
Warhammer 40,000 has a tripartite universe where you get the Cartesian Dualism mixed up with Berkeleyan Idealism, so that an evil d[a]emon could be misleading your mind about the state of the world despite that world being a thought in the mind of God. So it seems that characters often have a brain, a mind, and a soul all at once. Common objects usually have at least one.
The Alpha Legion, on the other hand, don't have to believe in whatever abstract nonsense the author is trying to pass off as philosophy about truth, faith, and the nature of the 40k universe, but merely have to be skeptical of the Imperial Truth. Abnett does a good job writing it, although the job is easier from the standpoint of verisimilitude.
I suspect that it's harder to write a tragedy when it's hard to relate to the protagonists, but Lorgar strikes me as the most relatable. His response to the destruction of the cities on Khur (cities on the plain, anyone?) was pretty reasonable.