May 7, 2019 at 12:00 am
Comments posted to this topic are about the item A Double Failure
May 7, 2019 at 8:22 am
The Netflix link posted a few editorials ago shows that even pseudonymised data is vulnerable. Internally, we colour-code our data.
As our use and understanding of data has evolved, we have come to realise that data colour has a tendency to bleed towards the red end of the spectrum. Rather than introduce a decision point as to whether data is sensitive or not, just treat it all as if it were sensitive. That way there is no scope for mistakes caused by ambiguity. Where pain points in process and bureaucracy exist because of such classification, we should strive to address the process and bureaucracy rather than the data security classification.
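The "everything defaults to red" policy above can be sketched as a default-deny lookup: a dataset is treated as sensitive unless someone has explicitly reviewed and declassified it. The colour names and the registry contents below are illustrative assumptions, not any real scheme:

```python
from enum import Enum

class Colour(Enum):
    """Traffic-light data classification (levels are illustrative)."""
    GREEN = "public"
    AMBER = "internal"
    RED = "sensitive"

# Hypothetical registry: nothing appears here until someone has
# explicitly reviewed and declassified the dataset.
DECLASSIFIED = {
    "office_locations": Colour.GREEN,
}

def classify(dataset: str) -> Colour:
    # Default-deny: any dataset not explicitly reviewed is RED,
    # so ambiguity can never silently downgrade protection.
    return DECLASSIFIED.get(dataset, Colour.RED)
```

The point of the design is that the safe outcome requires no decision at all; only declassification needs a human in the loop.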
May 7, 2019 at 2:06 pm
We keep hearing about data breaches resulting from data being exposed on the internet, meaning that a web search could access the data in unencrypted form. There is more to building a startup company than convincing thousands of customers to dump their documents on your website. Apparently the secret to success for many startups is taking shortcuts.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 7, 2019 at 4:12 pm
The idea of using colors is interesting, but it's a coarse way of looking at data. Do you color the columns, the tables, or the databases? In our Data Catalog work with customers, many want something more granular, capturing not only the ways in which data is categorized but also who is responsible for making decisions about it. It's caused us to open up our taxonomy quite a bit to allow for multiple places where classification is applicable.
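A minimal sketch of the kind of granular catalog entry described above, assuming a hypothetical model in which a column-level classification can override the coarser table-level one, and every label names a steward accountable for decisions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Classification:
    level: str    # e.g. "public", "internal", "restricted"
    steward: str  # person accountable for classification decisions

@dataclass
class Column:
    name: str
    classification: Optional[Classification] = None  # may inherit from table

@dataclass
class Table:
    name: str
    classification: Classification
    columns: List[Column] = field(default_factory=list)

def effective_classification(table: Table, column: Column) -> Classification:
    # A column-level label, when present, overrides the table-level one;
    # otherwise the column inherits its table's classification.
    return column.classification or table.classification
```

This captures the "multiple places where classification is applicable" idea: a `customers` table might be internal overall while its `ssn` column is restricted.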
May 7, 2019 at 4:15 pm
We keep hearing about data breaches resulting from data being exposed on the internet, meaning that a web search could access the data in unencrypted form. There is more to building a startup company than convincing thousands of customers to dump their documents on your website. Apparently the secret to success for many startups is taking shortcuts.
The key to moving fast is taking shortcuts, but what I really think is that we don't instil a good mindset when people start to learn programming. Too many people work wide open with their data and applications, thinking about logins and security later. Somehow, we need to force people to think about security early on.
We also need to prevent any sort of server existing without a password. This should have been security 101 in 1970.
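As a sketch of that fail-fast "security 101" thinking: a process can simply refuse to start when its credentials are missing, rather than falling back to a passwordless default. The `DB_PASSWORD` variable name here is an assumption for illustration:

```python
import os
import sys

def require_credentials() -> str:
    """Refuse to start if the database password is missing or blank."""
    password = os.environ.get("DB_PASSWORD", "")
    if not password.strip():
        # Exiting with an error message is the whole point: there is no
        # code path that runs the server without a password.
        sys.exit("refusing to start: DB_PASSWORD is not set")
    return password
```

Calling this at the top of startup makes "server with no password" an impossible state instead of a forgotten checklist item.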
May 7, 2019 at 6:57 pm
Perhaps, as in genetics with the recent CRISPR scandal, new ventures in AI / machine learning (which require massive datasets that can't easily be manufactured) should be regulated, or even prohibited. Missing passwords and unsecured production data in non-production contexts point to underlying problems. In my experience, administration of non-production environments can be more challenging, yet it often goes under the radar.
Maybe due diligence on the part of venture capitalists investing in 'big data' projects should include 'privacy provisions', just as any new oil drilling venture would require environmental impact assessments. As the cliché goes, 'data is the new oil'.
This approach might even bolster the role of data professionals in an era where PaaS and IaaS make it very easy to 'get started'. I am sure many of us have seen a serviceable prototype rushed into production to begin exploitation before the solution is stripped of its mirrors and chicken wire. Startups, by their nature, will push aggressive schedules, potentially without the protective compliance infrastructure that established concerns take for granted.
Evisort might crash and burn, losing investor cash, but what about the owners of the data that has been compromised? The big stick of GDPR can dole out very large fines, but it doesn't deal with the root causes. AI / machine learning is perhaps the gold rush of the moment, and we could do without a wild-west approach to all the data we know is out there (and especially the data we don't).
May 8, 2019 at 1:51 pm
Here in the US, the Federal Trade Commission (FTC) mandates that corporations report data breaches, and other agencies enforce HIPAA and the Sarbanes-Oxley Act (SOX). But is there an agency (the digital equivalent of the EPA) that proactively monitors the internet for data dumps and unsecured databases?
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 8, 2019 at 1:59 pm
I'm not sure the FTC has a general reporting rule. They have rules covering health data, and there was a draft rule (maybe now law?) on credit agencies, but I don't know what the reporting requirements are.
I doubt there is an agency looking for data on the internet; more likely they expect journalists to find things and report them, at which point they can take action.
May 8, 2019 at 3:52 pm
I'm not sure the FTC has a general reporting rule. They have rules covering health data, and there was a draft rule (maybe now law?) on credit agencies, but I don't know what the reporting requirements are. I doubt there is an agency looking for data on the internet; more likely they expect journalists to find things and report them, at which point they can take action.
So, the EPA and FTC don't necessarily depend on journalists or whistle-blowers to inform them of situations that might require their attention (although that's certainly where they get a lot of their information). Perhaps we've reached the point of scale where securing data in the private sector is every bit as relevant to public safety as, say, hazardous waste or the financial system. If a company or individual needs a special license to legally stockpile agricultural fertilizer, then why not a license to collect and warehouse credit card numbers, SSNs, personal medical history, and cell phone tracking data?
We need laws that regulate the mere possession of personal data, regardless of the context in which it's used. The regulatory agency could make inferences about whether a specific company is in possession of illicit data by analyzing its business model, partnerships, and the functionality of its apps and website. Just like the IRS, the agency could then step in and investigate the organization's operations and database records.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 8, 2019 at 3:58 pm
That's interesting: the idea that you need a license to collect personal data. That might help ensure startups think about this stuff early. Right now a new company is essentially unregulated.
June 20, 2019 at 9:54 am
I just want to share a link to an article in which Evisort responded to this data breach: https://www.artificiallawyer.com/2019/04/23/legal-ai-firm-evisort-responds-to-database-exposure-security-claims/
The company claims the exposed database was part of internal development and is not part of the production environment it uses for clients.
So, on the strength of an anonymous tip, TC is damaging the reputation of a whole business. I understand that TC posts articles about scams and that this is how TC works. TechCrunch is a giant in the tech world, and obviously Evisort is nowhere near its size. But if a business responds to your article, you should consider updating it. I am not sure whether someone from the company has approached you directly, but I would advise you to read this article (https://www.artificiallawyer.com/2019/04/23/legal-ai-firm-evisort-responds-to-database-exposure-security-claims/) and then make the changes in your publication.
The company is doing a fantastic job serving its clients, but this article is hurting them badly. I would encourage visitors to consider them for data security.
June 20, 2019 at 3:58 pm
I'll certainly update the article with the link, as this might not be an issue, but exposing dev databases to the internet is still an issue. These environments are refreshed, and it's entirely possible that customer data was present at some point: either it was removed, or a later refresh simply didn't contain any. It's also possible that a future refresh could have exposed customer data.
I'm not implying this was the case, but it could happen, and a company might dispute the findings after removing the data. I didn't access anything myself, so I have no idea whether this was a legitimate exposure of customer (or internal) data, or a case of poor journalism (or malicious reporting).
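One way dev refreshes can avoid the risk described above is to mask identifying fields on the way into the non-production environment, so a leaked dev database never contains real customer identifiers. A minimal, hypothetical sketch (note that a salted hash like this is pseudonymisation, not true anonymisation, as the Netflix example earlier in the thread shows):

```python
import hashlib

def mask_email(email: str, salt: str = "dev-refresh") -> str:
    """Replace a real address with a stable, non-reversible placeholder."""
    digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()[:12]
    return f"user_{digest}@example.invalid"

def refresh_row(row: dict) -> dict:
    # Mask sensitive fields during the copy into the dev environment;
    # the same input always yields the same placeholder, so joins and
    # test cases remain stable across refreshes.
    masked = dict(row)
    if "email" in masked:
        masked["email"] = mask_email(masked["email"])
    return masked
```

Masking at refresh time means there is no window in which the dev database holds real data, which is exactly the scenario a later refresh could otherwise reintroduce.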