EC-Council Plagiarizes Secureworks

Fri 25 Jun 2021 11:17:46 PM EDT

On June 22, 2021, Alyssa Miller tweeted that EC-Council had plagiarized another blog. This follows her earlier discovery that they had plagiarized one of her own posts, as well as a long history of plagiarism in general. In this instance, the original content is a blog published by Secureworks on January 6, 2021, authored by Pierre-David Oriol and Serge-Olivier Paquette. The plagiarized content is a blog published by EC-Council on June 11, 2021, not attributed to a specific person.

Given EC-Council's well-documented history of plagiarism, we will not do an exhaustive analysis of this blog. Instead, we'll show a few sections side-by-side with matching words highlighted. The frequency and order of the words, along with the general organization of the content, make it clear that EC-Council copied the material. In addition to the highlighted bits, which show the structure of the plagiarism, you can see the simple word substitution that is common in plagiarism like this.

Secureworks: AI is an umbrella term that encompasses several areas of advanced computer science, everything from speech recognition to natural language processing, to robotics, to symbolic and deep learning. AI technologists are constantly striving to automate seemingly “intelligent” behavior, or put differently, programming computers to do historically human tasks.

EC-Council: AI is a blanket term consisting of numerous advanced computer science areas ranging from voice detection to typical language processing, robotics, and deep representational learning. Scientists and researchers aim to automate intelligent behavior in machines that are capable of doing human tasks.

Secureworks: One AI component used extensively in many applications is machine learning: algorithms that leverage historical data to make predictions or decisions. The more ample the historical data, the higher the probability the prediction will be useful or accurate. As more historical data is gathered, the machine learning engines predictions improve, or in the vernacular of pop culture, the application "gets smarter." For example, a machine learning-based application identifying the probability of lung cancer from an X-ray can make a prediction from a historical data set of 10 X-rays, but that prediction's accuracy will be negligible.

EC-Council: A single AI component used expansively in several applications is machine learning - the algorithms that support historical data/information to forecast or make decisions about a particular action. More extensive the historical data, the machine learning's decision-making capabilities improve and make better and accurate predictions about situations or circumstances and are termed as getting smarter. It advances with time and without human interference.

Secureworks: Finding all assets is the foundation of an effective vulnerability management program, especially those assets that may appear atypical in a given context. Using conventional detection mechanisms, given the sheer number of assets in a typical network, it can be difficult to find network assets that are contextually out-of-the ordinary. For example, a server that hosts many websites or services, a workstation in a subnetwork full of servers, or a Linux server in a network of Windows machines with database services running. These kinds of assets should be considered particularly crucial, and as such deserve more attention from security teams.

EC-Council: It is relatively important to detect all the assets/devices for an effective vulnerability assessment, especially those atypical/uncategorized in a given context. Conventional methods are not efficient to detect uncategorized information/data/assets, such as a Linux server in windows machine with database services. These types of conditions require at-most priority from security teams.

Secureworks: An element of vulnerability management often unappreciated by those outside the field is the challenge of vulnerability detection. Determining whether an asset is configured such that it has an exploitable vulnerability can be more art than science, and the process is susceptible to a high frequency of false positives. AI can be employed in this part of the vulnerability management process to help reduce the number of false positives, essentially "detecting the misdetections." Factors such as services running on the asset and the detection mechanism that flagged the vulnerability can be used to assess the probability that the identified vulnerability is, in fact, a legitimate one. And, as the experience of the AI system increases over time, its ability to accurately predict false positives versus legitimate vulnerabilities will improve.

EC-Council: It is crucial to determine whether a vulnerability is exploitable or not as the process of vulnerability detection involves a high range of false positives. AI methods and techniques can be implemented in detecting the vulnerabilities, which significantly reduces the number of false-positive outcomes by detecting the misdetections. Various services like services running and others, and the vulnerability which was flagged as a result of the detection method, are used to confirm the legitimacy of the identified vulnerability. With experience, the ability of AI machines can accurately detect false positives from legit vulnerabilities.

Secureworks: All modern vulnerability management products today are either cloud-based or have a cloud-based component. Although there are myriad benefits to a cloud-based vulnerability management platform, one of the most valuable (yet typically underappreciated) is the user data that can be anonymized and culled from the application. Every organization is often remediating vulnerabilities on multiple assets daily. Multiply several daily remediation activities across dozens, hundreds or thousands of customers, and a cloud-based vulnerability management product has a rich data source on which to apply an AI engine. Using this ever-changing and growing data source can reinforce or contradict conventional vulnerability remediation prioritization. Which assets are enterprises patching the most frequently? Which vulnerabilities appear to be the most concerning to peer organizations? Which are lower priorities? We all learned in high school that copying one classmate's answer on a test question is not only unethical, but a risky proposition given there's no assurance that you picked the right classmate to copy. However, if you could determine that 90% of the class chose a specific answer, you'd have significantly more confidence the answer was the right one. Applying AI to actual vulnerability remediation data across multiple organizations can yield insights based on the collective judgement of many hundreds or thousands of IT and security peers, and as discussed previously, the larger that peer group grows, the higher the probability the decisions are sound. Using a machine-learning technique known as Gradient Boosted Tree Regression, user behaviors and preferences can be blended with their history of remediation to predict what is important (for example, clickthrough rate). Using this ever-expanding database of cloud-based users and their remediation activity, the contribution to the vulnerability risk score becomes a dynamic element that reflects the constantly changing nature of the threat.

EC-Council: Every contemporary vulnerability assessment product has cloud-based components in them, and some are completely cloud-based. Cloud-based vulnerability assessment/management platforms are extremely beneficial. One of the most important benefits is the anonymization of user data which can be reduced and discarded from the applications. Every single organization is regularly remediating vulnerabilities daily. Several remediation procedures over several customers are performed, and cloud-based vulnerability assessment products have a rich data source on which AI engine can be used. The source undergoes constant changes due to various factors and collects data. This can either strengthen or contradict the conventional remediation methods of vulnerability prioritization. AI can be applied to actual vulnerability remediation data, resulting in yielding insights based on various sources' shared judgments. Gradient Boosted Tree Regression is a machine learning technique that, when combined with user behavioral patterns and preferences, results in predicting what is essential, which helps understand and remediate vulnerabilities.
