The following paragraphs come from the ACM Code of Ethics. When joining the ACM, members are required to read, understand, and agree to follow the standards set forth in that document.
2.5 Give comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks.
Computing professionals are in a position of trust, and therefore have a special responsibility to provide objective, credible evaluations and testimony to employers, employees, clients, users, and the public. Computing professionals should strive to be perceptive, thorough, and objective when evaluating, recommending, and presenting system descriptions and alternatives. Extraordinary care should be taken to identify and mitigate potential risks in machine learning systems. A system for which future risks cannot be reliably predicted requires frequent reassessment of risk as the system evolves in use, or it should not be deployed. Any issues that might result in major risk must be reported to appropriate parties.
2.6 Perform work only in areas of competence.
A computing professional is responsible for evaluating potential work assignments. This includes evaluating the work's feasibility and advisability, and making a judgment about whether the work assignment is within the professional's areas of competence. If at any time before or during the work assignment the professional identifies a lack of a necessary expertise, they must disclose this to the employer or client. The client or employer may decide to pursue the assignment with the professional after additional time to acquire the necessary competencies, to pursue the assignment with someone else who has the required expertise, or to forgo the assignment. A computing professional's ethical judgment should be the final guide in deciding whether to work on the assignment.
Whether we like it or not, there are times when we're asked to work on something we have no business dealing with. Whether it's holes in our knowledge of security, fundamental algorithms, or the legal and ethical space we're in, it's important to step back and ask ourselves whether we, specifically, should be doing this.
My emphasis is on the word "we", referring to those of us on the line. Even though line workers rarely suffer the consequences of incompetently designed, built, or tested software, the ACM states we have an obligation to perform our respective jobs with competence and to be unafraid to speak up when we feel "in over our heads". It is up to us, at all levels and in all disciplines, either to train up to the level needed or to bow out of a situation when we don't have the competence to complete a task.
Providing well-executed artifacts starts with us: if we are to be counted as professionals, we must behave professionally.
The machine learning clause
I find the ML case interesting. I think about the potential upside of neural nets often enough. Thus far, though, I cannot think of a single case where I feel it would be worth my learning the field, not because I don't believe in its applications, but because of the opacity of the underlying systems.
Even though a number of these learning systems are open source, I find the density of the theory and its application beyond my abilities; it looks like dense mathemagical stuff to me. There are tools that employ ML algorithms to quantify or qualify information for non-experts, but how do those non-experts know an algorithmic choice is correct? This is a "danger close" situation, and many people don't understand how the smallest of failures can have the largest of effects.
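To make the opacity concrete, here's a minimal sketch (scikit-learn is my choice for illustration; the dataset, model size, and probe point are all invented, and no particular tool is implied). Two identical small networks, trained on the same data and differing only in random seed, can confidently disagree near the decision boundary, and neither offers a non-expert any way to tell which answer is correct:

```python
# A minimal sketch of the opacity problem, not a claim about any real tool:
# two small neural nets trained on the same data, differing only in random
# seed, can disagree on an input near the class boundary.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=200, noise=0.3, random_state=0)

# Same architecture, same data -- only the weight initialization differs.
model_a = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)
model_b = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=2).fit(X, y)

# A probe near the boundary: the models may confidently disagree, and
# nothing in either model explains which answer (if any) is "correct".
probe = [[0.5, 0.25]]
print("model A:", model_a.predict(probe), model_a.predict_proba(probe))
print("model B:", model_b.predict(probe), model_b.predict_proba(probe))
```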
At 1.3 seconds before impact, the self-driving system determined emergency braking was needed. But Uber said, according to the NTSB, that automatic emergency braking maneuvers in the Volvo XC90 were disabled while the car was under computer control in order to “reduce the potential for erratic vehicle behavior.”
This incident dealt with "erratic behavior" by trading it for what I would call far more erratic behavior: someone in a position of trust defeated a safety feature. The story is a perfect illustration of non-expert effort and of the network effects of poorly understood systems.
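To show how a single defeated flag can swallow a correct detection, here's a purely hypothetical sketch; none of this is Uber's actual code, and every name and structure in it is invented for illustration:

```python
# Purely hypothetical pseudologic, invented for illustration (not Uber's
# code): one configuration flag defeats an entire safety layer, so the
# detection still fires but the action it exists to trigger never happens.
from dataclasses import dataclass

@dataclass
class VehicleState:
    computer_control: bool   # car is driving itself
    aeb_enabled: bool        # automatic emergency braking

def on_emergency_detected(state: VehicleState) -> str:
    """Called when the system decides emergency braking is needed."""
    if state.computer_control and not state.aeb_enabled:
        # The feature was switched off to "reduce erratic behavior":
        # the hazard is detected, and nothing acts on it.
        return "no action"
    return "brake"

# Detection at 1.3 seconds before impact, with the safety layer defeated:
print(on_emergency_detected(VehicleState(computer_control=True, aeb_enabled=False)))
# -> "no action"
```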
Regarding Uber's safety hire, the company chose not to hire an engineer in the field:
The company did not directly comment on the NTSB findings but noted it recently named a former NTSB chairman, Christopher Hart, to advise on Uber’s safety culture.
As a pilot, Mr. Hart is qualified to fly an airplane and understands its safety systems well enough to operate that specific piece of complex machinery. As a former director within the FAA and a former chairman of the NTSB, he's qualified to serve on a board or lead a division of one of those agencies. As a lawyer, he's qualified to practice law in his field.
I would feel better knowing that an expert in the specific field of system safety and reliability engineering was at the helm of the effort. I'm not slagging Mr. Hart off here; he probably has a lot of experience from his years at the agencies that could help. I just wouldn't put him above an expert practitioner in the field.
ML is a high-stakes game, but what about "business basic" stuff? A lot of people think line-of-business software and SaaS offerings are boring because they're "not life or death" in scope. There are plenty of tales of catastrophic failures of critical systems out there, but what about the little failures?
What about the open access to S3 buckets that lays PII bare to the world? What about accidentally deleting S3 buckets, losing all business value contained therein? What about flipping the wrong flag and taking US East down? What about the decision to trust these systems? Without a backup? What about storing the backup on the same service? And what of never testing the backups? These are all questions of expertise and experience.
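The first of those questions is even checkable in a few lines. As a hedged sketch using boto3 (it assumes AWS credentials are already configured and the caller is allowed to list buckets and read their public-access settings), this scan flags any bucket that doesn't fully block public access, the "PII bare to the world" failure above. It's a sketch, not a substitute for a real security review:

```python
# Sketch: flag S3 buckets that do not fully block public access.
# Assumes configured AWS credentials with permission to list buckets
# and read each bucket's public access block configuration.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(config.values()):
            print(f"WARNING: {name} does not fully block public access: {config}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"WARNING: {name} has no public access block configured at all")
        else:
            raise
```

The backup questions have no one-liner: the only way to know a backup works is to restore it, somewhere other than the service that holds the original.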
It is important to recognize when we're unqualified to deal with these systems. If you feel you're in over your head, state unequivocally that you're uncomfortable with the task at hand and that you need additional training or help to solve it.
This is the hardest part:
A computing professional's ethical judgment should be the final guide in deciding whether to work on the assignment.
Depending on the severity, you must be ready to push the issue higher, blow the whistle, or find employment elsewhere.