A cybernetic system is one that has the capacity to correct its own operations, through mechanisms of self-reference, in response to changes in its environment. Human beings can plainly be considered cybernetic systems, insofar as we can change our behavioral patterns in response to changes in our environment, whether that environment be societal, cultural, or natural. These changes can manifest in a variety of ways: in response to natural phenomena outside of the human system proper, for instance, as increases or decreases in metabolic efficiency, the building up of resistances to various toxins, and so on. In societal and cultural environments, however, these changes can take more subtle forms: they might manifest as one’s decision to wear different clothing, to alter one’s posture, or to affect a different accent, but these are all relatively tame examples. Changes in the ethical or moral structure of a societal or cultural environment can induce a change in a system’s morality: that is to say, if some new paradigm of ethical thought becomes the norm in a given societal or cultural environment, such as the notion that human beings are created equal, or that it is better to focus on one’s immediate surroundings than on the world as a whole, then there seems to be a tendency in human systems to adopt these new paradigms in pursuit of a more comfortable degree of survival in that environment.
Writ short, if everyone around you thinks that something you find to be morally permissible is in fact not, chances are good that you will change your mind. This is a tactical and cybernetic move: it involves the recognition of your position as an individual with a limited degree of power in an environment that, at the end of the day, probably wants you dead, and an awareness of the overwhelming power of everyone else. This poses significant problems for the philosophical consideration of ethics: the introduction of, as it were, peer pressure to the decision to adopt certain ethical paradigms over others raises the challenge of ethical relativism. If certain systems of ethics are more popular and therefore more accepted than others out of nothing more than the desire to stave off ostracization and bodily harm, there seems to be some element of coercion at play in any ethical system, which, for most systems, runs the risk of hypocrisy.
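The adaptive mechanism described above can be illustrated with a deliberately toy model (none of the names or numbers here come from the text; they are hypothetical): an agent whose position on some question drifts a fraction of the way toward the prevailing view of its peers on each iteration, a simple self-referential feedback loop.

```python
# Toy sketch of cybernetic adaptation under social pressure.
# The function name, the "rate" parameter, and all numbers below are
# illustrative assumptions, not anything specified in the essay.

def adapt(own_view: float, peer_views: list[float], rate: float = 0.25) -> float:
    """Move own_view a fraction (rate) of the way toward the peers' mean view."""
    norm = sum(peer_views) / len(peer_views)
    return own_view + rate * (norm - own_view)

view = 1.0                    # the agent's initial position
peers = [-1.0, -0.8, -0.9]    # an environment that largely disagrees
for _ in range(10):           # repeated self-correction against the norm
    view = adapt(view, peers)
print(round(view, 3))
```

The loop converges geometrically toward the peer mean: after ten iterations the agent's view has moved most of the way from 1.0 toward −0.9, mirroring the essay's claim that an individual, facing the overwhelming power of everyone else, tends to revise their position.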
A potential solution to this quandary is to assume that there is some larger, more generalized ethical framework within which more particular frameworks are generated. In other words, to assume that there is a metasystem of ethical systems. This assumed metasystem, in order to be valid and legitimate, would necessarily have to be founded on some form of absolute morality, or at least some form of morality that is universal to every human being. The desire to survive is, more likely than not, the bedrock of this morality: the biological imperative of existence is, by induction, to keep existing for as long as one can. The desire to maintain life is functionally a desire to revoke entropy, to eliminate the possibility of death.
The revocation of entropy is not the sole groundnorm of this cybernetic ethical (or cyberethical) metasystem: there are two others. The first is the pursuit of accurate and valid information, a norm present in every agency-based ethical framework, and the second is the imperative of increasing technological, spiritual, mental, and human development, extrapolated from the ethical imperative to defeat entropy. We will consider both in turn.
For any action with a clearly defined goal performed by an agent with a given moral and ethical framework or hierarchy, there is a way of performing that action so as to maximally satisfy that agent’s moral and ethical framework while simultaneously minimizing the effort and resources required to perform it. This notion, broadly, can be considered an ethical efficacy. In a sense, it is a kind of practical utilitarianism: a way of maximizing pleasure (the performance of the action such that the desired goal is achieved with a maximal degree of adherence to the agent’s moral and ethical hierarchy) while minimizing pain (the effort and resources required for the performance of the action). While two agents with differing moral and ethical hierarchies may undertake an action with the same goal, they can have very different ways of achieving their ends so as to maximize their ethical efficacy. No matter what frameworks the two agents might have, however, both require an understanding not just of their own frameworks but of the environment they are acting in: otherwise, they cannot reliably determine what action will be maximally ethically efficacious for them. This understanding is contingent on the accuracy and amount of information those agents possess, or can acquire, with regard to the environment and framework they exist in. Practically and tactically speaking, it is always better to possess more information about a situation than less: and if the notion of ethical efficacy holds true for all ethical frameworks, we can consider its criterion, the pursuit of information, to be a criterion for all ethical actions. Thus, the pursuit of information is shown to be a tenet of the cyberethical metasystem: it is present in any and all ethical frameworks, and all ethical systems are necessarily concerned, to one degree or another, with the acquisition, control, and pursuit of information.
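The notion of ethical efficacy described above can be sketched, informally, as an optimization problem. The symbols here are our own shorthand, not the essay's: $A$ for the set of available actions, $S(a, M)$ for how well action $a$ satisfies the agent's moral hierarchy $M$, $C(a)$ for the effort and resources $a$ requires, and $\lambda$ for a weighting between the two.

```latex
a^{*} \;=\; \operatorname*{arg\,max}_{a \,\in\, A}\; \bigl(\, S(a, M) \;-\; \lambda\, C(a) \,\bigr)
```

On this reading, the requirement that the agent understand both its framework and its environment corresponds to the fact that neither $S$ nor $C$ can be evaluated without information about both.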
The imperative of development is formed out of a synthesis of the information and entropy-rejection imperatives: while a perfect rejection of entropy would consist in a form of absolute and perfect stasis, where no heat is wasted, no motion is possible, and no one can die, this stasis cannot be applied to thoughts. If there are an infinite number of thoughts that a human being can have, and every thought that a human being can have can generate a new thought when related to one or more other thoughts (or sets of thoughts), then each time a thought occurs that has yet to be considered, in actuality another infinity of thoughts occurs simultaneously. In other words:
∞ + 1 < ∞(∞ + 1)

Where ∞ represents a countable infinity of thoughts, and 1 represents a new thought. ∞ + 1 is therefore the generation of a new thought in perfect stasis, while ∞(∞ + 1) is the generation of a new thought outside of stasis, where relation is possible. The entropy-rejection imperative cannot be embraced in totality for the simple reason that the amount of information that can be derived from an environment in stasis is less than that derivable from an environment not in stasis: as such, there exists a cyberethical imperative that humanity must keep thinking, and thinking ever more self-referentially.