There are no such things as technology problems, only people problems.
No technology can build itself, use itself, or correct its own problems. Even self-replicating machines, built with any technology in use (or even in conception) today, would merely execute the delayed choices of their builders. Consider a man who, eager to protect his home against theft, installs an anti-theft device that will kill any unwanted intruder, perhaps with a bullet to the head. Such a device is commonly called a booby trap. One day, while the homeowner is away, an intruder enters the home and is killed. Is the homeowner responsible? You betcha! The homeowner may claim he is not responsible because he did not pull the trigger directly, but in the end he made a choice to apply extreme prejudice to any intruder, and he built a device to execute that delayed choice. The booby trap did not kill the intruder; the homeowner did. Every action of any technology, including any act of construction, repair, or use, is ultimately the extended action of human beings.
No technology is a perfect fit for any problem, and every technology comes with trade-offs. Even survival carries its own set of trade-offs. It is the responsibility of human beings to understand their problems to the best of their abilities, to understand the trade-offs associated with the technology options before them, and to choose appropriate technologies wisely. Trade-off balancing does not happen on its own. Humans are the ultimate arbiters of which technology problems they choose to live with.
If all humans were to vanish from this Universe tomorrow, there would be no human problems of any kind. Human technologies would instantly cease to be human technologies and would merely exist as artifacts of matter like any other. At the same instant of universal human extinction, all "problems" would likewise vanish.
This is not merely an academic exercise in ethics. The implications of failing to understand this point can be tremendous. If the homeowner in my delayed-choice example had understood his culpability ahead of time, would he have been so eager to create his intruder-killing device? In business, law, and politics, failure to understand the concept of delayed choice leads to a class of problems called moral hazards. Failure to understand this critical point about technology, computing technology in particular, can lead some people to impart "magical" qualities to technologies that they do not have, which skews expectations and can lead to project and business failure.
No, no, no. The only kinds of problems that exist in this world are people problems, by definition. If you doubt that, find a way to kill all of humanity right now and watch every problem vanish the moment before you and I cease to be.