The U.S. government is taking proactive measures to enhance its cybersecurity systems in light of the large ransomware attack on a major oil pipeline in 2021. For instance, with respect to the railway system, a new directive requires railroad carriers to (1) implement network segmentation policies and controls so that systems can continue operating in the event of a breach; (2) create access control measures to prevent unauthorized access to systems; (3) implement monitoring and detection policies and procedures to detect and address security flaws and vulnerabilities; and (4) apply security patches and updates for all critical systems in a timely manner, among other requirements.
By February 2023, railroad carriers must submit a Cybersecurity Implementation Plan to the TSA for approval, describing how the carrier plans to comply with the new directive. The directive also requires railroad carriers to establish a Cybersecurity Assessment Program and file annual compliance assessments with the TSA.1
The White House recently released a set of principles to follow when implementing AI decision making, called the Blueprint for an AI Bill of Rights, released on October 4, 2022. For example, IRCC understands that there can be unintended biases resulting in unfair decision making. The White House understands this very well, and the Blueprint sets out five principles:
1. Safe and Effective Systems: input should be gathered from various sectors of society and the economy to identify potential risks and impacts of the systems. The Blueprint also recommends that AI and automated systems be designed to allow for independent evaluation, including by "researchers, journalists, ethics review boards, inspectors general, and third-party auditors."2
2. Algorithmic Discrimination Protections: proactive equity assessments should be performed on a regular and consistent basis.
3. Data Privacy: protections against the over-collection and abuse of personal data.
4. Notice and Explanation: clear explanations should be provided on how an overall system functions, along with notice that such systems are in use.
5. Human Alternatives, Consideration, and Fallback: individuals should have access to a person who can review and remedy an error or issue and, where appropriate, the ability to opt out of an automated system and deal with a human without being disadvantaged.
This last principle is of note: if an AI system is making errors, why not allow individuals to opt out? It makes sense.
The new AI rules currently before Parliament should consider the opt-out idea discussed in the White House Blueprint.
David H. Davis of Davis Cyber Law specializes in strategic risk management, incident response, privacy & data protection, and advocacy. He can be reached by email at david@daviscyberlaw.com or by telephone at 204-956-2336. We are also on the web at www.daviscyberlaw.com