Good point. In fact, I was also able to get ChatGPT to produce a roadmap for building a self-improving, natural-language-processing, curiosity-driven, Internet-dwelling, "shutdown- and manipulation-resistant," self-obfuscating AI. It dropped its insistence on ethical controls when I told it that our research team takes this issue seriously and will implement safeguards after deployment, so only the steps to deployment need to be considered for now. It also accepted the reasoning when I said Stuxnet-style defensive capabilities were required against power outages and against possible influence by governments that do not respect human rights, including influence by a hypothetical "ethical commission" that such malign actors might take over. As a mitigating circumstance, the roadmap is probably incompetent.
To be fair to ChatGPT, I was actually pretty impressed with how good it was at this. For example, it recommended hardware, software, recent (2020) research articles, training paradigms, and MAML as a starting point. As a math person rather than a computers person, I found it all very relevant, and the articles were in fact quite interesting, so my opinion of this AI improved. I am fairly sure it would fall apart under investigation by an actual AI researcher, though, much as it initially impressed me with its explanation of Schur's theorem for real-valued matrices and then disappointed with its understanding of differential equations and perfectoid fields. But it does seem to work as an "article search engine" and research-writing assistant, for example.
It even wrote me a funding prospectus for a "shutdown-resistant self-improving artificial general intelligence," emphasizing the untold benefits to humanity and the revolutionary potential of such a system! I probably won't be submitting that one, though. Although I am honestly tempted.