Then there are alpha+ level AIs, the ban of which is so important to the Hegemony that they went to war with TT over it twice. That implies that they consider alpha+ AIs extremely dangerous, which in turn implies that those AIs have the potential to threaten the Hegemony or even humanity at large. There are two criteria an AI has to meet to qualify for that. It has to be:
- Unconstrained, free-thinking, general purpose
- More intelligent than humans
I disagree. Something does not need to be unconstrained, free-thinking, or general purpose, or more intelligent than the average human, to qualify as extremely dangerous. Many large terrestrial carnivores, for example, are potentially extremely dangerous even to a prepared human who enters the animal's preferred environment or unintentionally comes too close; reasons beyond simple competition or the need for food have led humanity to do its best to drive out, hunt down, or exterminate such animals over much of its range and history. Moreover, something need not threaten the Hegemony or humanity at large to have been worth a war to the Hegemony; an irreconcilable difference in ideology could be sufficient on its own, and high-end AI development would certainly appear unpopular with the Hegemony's large Luddite population (heck, the game even tells us that the treaty restrictions imposed by the Hegemony at the end of the first war bought the Hegemony's government some approval from its Luddites).
Beyond that, there are reasons beyond simple threat to ban the development and use of machinery with human- or near-human-level sentience and sapience. If your desktop computer is about as sentient and sapient as you are, is it morally or ethically acceptable for you to buy or sell it? Upgrade it without giving it a say in the matter? Sell off parts of it that you don't really need anymore, or at the moment? Turn it on or off at your convenience? Require it to perform whatever task you ask of it, regardless of its preferences or the cost to its well-being? It's a machine, initially built for the express purpose of being a tool, but it's also something which, absent knowledge of the entity's form, could be mistaken for a human.

Banning alpha+ level AI development and use could be a natural extension of the modern world's abhorrence of human slavery; mass production of a sufficiently sentient and sapient machine is not in any significant way different from industrial-scale commercial human cloning. The 'product' of either process is something which most likely cannot legally, ethically, or morally be sold as a commodity*; most people would probably agree that whether you came out of a test tube, a cloning vat, or a woman does not matter when it comes to your legal rights. An entity sentient and sapient enough to be (nearly) indistinguishable from a human, absent information on its form, could reasonably be expected to be granted the same set of rights. People have fought wars over much more minor issues than whether an entire category of entities deserves the same (or at least similar) treatment as humans (of course, people have also ignored such major issues for a variety of reasons, ranging from money to common enemies to an earnest desire on the part of all, or at least most, parties to avoid open war even if they can't live in perfect harmony).
Personally, my feeling is that alpha+ level AIs are banned because they can pass the Turing test (or some other test or tests of sentience or sapience), not because they're all supergeniuses, and that beta-level and lower AIs are not banned because they cannot pass the test. It's the AIs capable of passing the test that can pull off the revolutions science-horror fiction likes to feature; it's the AIs capable of passing the test for whom treatment as an object, commercial product, or lower lifeform is at its most questionable from a moral, ethical, and legal standpoint*; and it's the (mass-producible) AIs capable of passing the test that can cause the most economic disruption, as they're the kinds most capable of putting the largest fraction of the workforce out of work (though you'd arguably be better off with a set of specialized lesser AIs than with something that is more or less an artificial human).
Fear of supposedly superhuman AIs can work as a reason for the AI Wars of Starsector, but I don't feel it's necessary. Fear of economic disruption, moral dilemmas, and a need to play for public support all work just as well, especially if the Hegemony had been looking to bring the Tri-Tachyon Corporation "into line" for other reasons and this was just a convenient excuse. As far as I'm aware, we don't know what the causes of the First AI War actually were; the name and what we know of the terms imposed by the treaty imply that it had something to do with the TTC's AI development and usage, but it's also possible that that's just what the official Hegemony (and possibly also TTC and independent) histories want people to focus on. Certainly all indications are that a war justified on the grounds of 'immoral technology' would be relatively popular with the Church of Ludd, the Luddic Path, and the largely Luddite population of many Hegemony worlds, whereas a difference of opinion over who has tech-mining rights or a territorial claim on Exar Secundus might not be so popular, or so in line with the historian's worldview.
*Under most modern codes of law, ethics, and morals.
Could Alpha+ mean "Alpha or above"?
Yes.