February 10, 2005 at 3:47 pm
Or more accurately, a virus in your anti-spyware. How about this for a new tactic? There are reports of a virus that attempts to disable Microsoft's Anti-Spyware product, which is still in beta testing. That's really interesting. Rather than attacking your computer, attack the watchdog and go from there.
That's really something to think about, and it should have all companies that produce virus protection or anti-malware products a little worried. They should be double-checking and verifying their code against attacks that might somehow disable or alter the workings of their products. I imagine there are people trying to get the source code for all such products and taking a look.
Even Firefox isn't immune, as there's a C|Net story looking at spyware for Mozilla and, ultimately, Firefox. That's all I need. I switched away from IE for the most part because of security issues. Now I guess I might be heading for Opera or some lesser-known browser that might escape the notice of those malicious individuals looking to write code that's disruptive or deceitful.
I'm not that surprised; in fact, I'd assumed it would happen at some point. I know a few of you didn't like my editorial on the MySQL worm, but I've always felt that trusting a product's security just because it's open source isn't necessarily valid. That doesn't mean open source products are not secure, but even if they are well written and tight, I see way too many patches coming through my security alert service for the various Linux and other open source products. Is everything re-verified each time there's a new patch? For some vendors, perhaps, but it's human nature not to constantly and stringently retest every single case when you patch one tiny little place in the code. And what are the possibilities that some malicious code slips in?
It's not that I'm paranoid, but there are lots of people out there who don't adhere to the same moral code that I do, and it's only a matter of time before they find creative ways to disrupt the open source computing world in the same way they've done in the past. I'm just not surprised anymore when I see new and creative ways of wreaking havoc.
Like the nanny scam. Not to worry, those of you out there with kids and a nanny or au pair, but this is something that happened in the US a few years ago. A lady took out an ad for a nanny. She interviewed many people and collected references, resumes, etc.
Then she used those references and resumes to apply for nanny jobs herself. Since there usually isn't a photo to match against, she secured a few jobs over time, working for a week or so and then robbing the family before moving on. No kids were hurt, but that's the type of sick creativity that we see in the computer world more and more.
Caveat Emptor, even when the product is free.
Steve Jones
February 11, 2005 at 1:44 am
The Firefox bug is that the address bar renders certain Unicode characters so that they look just like standard ASCII characters, so a spoofed address can appear in the address bar as if it were the legitimate one.
I think this bug is common to ALL browsers, including Opera.
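A minimal sketch of the mechanism (the domain and the Cyrillic substitution below are just illustrative, not taken from the report): the two host names render almost identically in most fonts, but the underlying bytes differ, so they resolve to different domains.

```c
/* Illustrative only: the Cyrillic letter "а" (U+0430) looks like ASCII "a"
 * in most fonts, so "paypal.com" and "pаypal.com" appear the same in an
 * address bar that renders raw Unicode, yet they are different domains. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *ascii_host = "paypal.com";         /* plain ASCII            */
    const char *spoof_host = "p\xD0\xB0ypal.com";  /* UTF-8 Cyrillic U+0430  */

    printf("ascii: %s (%zu bytes)\n", ascii_host, strlen(ascii_host));
    printf("spoof: %s (%zu bytes)\n", spoof_host, strlen(spoof_host));
    printf("same bytes? %s\n", strcmp(ascii_host, spoof_host) == 0 ? "yes" : "no");
    return 0;
}
```

Browsers that display internationalised names in their punycode (xn--...) form make the substitution visible instead of hiding it.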
February 11, 2005 at 7:11 am
Actually, all antivirus vendors have seen various attacks. If I remember correctly, Symantec had a buffer overrun flaw several years back, and a virus used it to hide itself from the scanner while it disabled it.
February 11, 2005 at 7:15 am
What I don't understand is how buffer overruns are found in proprietary software. I can understand them being found in open source code because you have access to the source, but how does one find them in compiled code?
February 11, 2005 at 7:18 am
but how does one find them in compiled code?
Too much time and not much else to do, I guess
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 11, 2005 at 7:40 am
I still can't figure out why these obviously talented people spend their time writing code that disrupts other systems for free when they could probably be pulling down some serious cash writing valuable/productive code!!
February 11, 2005 at 7:43 am
The number of original hackers is a small percentage of the overall number of malware inventors. A large number are just script kiddies that piggy-back on the good hackers. They can't get a real job for real bucks.
----------------
Jim P.
A little bit of this and a little byte of that can cause bloatware.
February 11, 2005 at 7:48 am
Can't find the link right now, but I remember that some time ago an 18-year-old (!) German student who spread a virus has now received a job offer from an anti-virus software vendor.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 11, 2005 at 10:13 am
Good points.
The issue is a broader political concern: how to balance privacy with the right to know the identity of persons who are performing public actions (actions that are not personal but affect others). The whole system of commerce is based on persons acting as free agents, contracts between free agents, and known identities.
I believe we have a right to privacy, but not a right to hide or disguise our identity when we interact with others. Extending the right to privacy to hiding identity is the cause of the rise in fraud and malicious behavior, and it violates reciprocity (a fundamental requirement for the development of trust in society, also known as the golden rule).
Eventually, the IT world's frustration with spam and malicious code will restore the practical requirement that we know the identities of the people with whom we conduct social and commercial transactions.
Dan Slaby
February 11, 2005 at 1:34 pm
Unfortunately, I see this like the gun control issue: we can "force" everyone to identify themselves, but the hackers (or criminals, in the case of guns) can still spoof identities (or get guns illegally). The innocent and law-abiding end up at the mercy of the criminally minded portion of society!
Darrell (just in case someone questions my identity :hehe:)
February 14, 2005 at 2:18 am
Oops, back to "Smashing the Stack for Fun and Profit" for you.
Buffer overflows remain the number one security vulnerability in proprietary software, and have been for as long as I can remember.
There are automated and manual methods of finding them in compiled code, although for obvious reasons there isn't much of a legitimate market for such tools.
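As a rough sketch of the automated end of that spectrum (the target binary "./victim" is made up for illustration, and real tools are far smarter about generating inputs), a black-box fuzzer simply hammers a compiled program with junk and watches for crashes:

```c
/* Hypothetical black-box fuzzing sketch: run the target with ever-longer
 * junk arguments and report if it dies on a signal (e.g. SIGSEGV), which
 * often indicates an unchecked buffer. POSIX-only, for illustration. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char input[4096];

    for (size_t len = 16; len <= sizeof input; len *= 2) {
        memset(input, 'A', len - 1);
        input[len - 1] = '\0';

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: run the (hypothetical) target with the junk argument. */
            execl("./victim", "victim", input, (char *)NULL);
            _exit(127);                      /* exec failed */
        }

        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {
            printf("crash (signal %d) at input length %zu\n",
                   WTERMSIG(status), len);
            break;
        }
    }
    return 0;
}
```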
The big free software projects, like the Linux kernel, have very few of these kinds of issues compared to proprietary code. The main reason is that any budding software engineer looking around for a big code base to run their latest validation tool on will find a big free software project and start testing. The Linux kernel is probably one of the most examined pieces of code on the planet; recent code quality audits show it has an exceptionally low defect rate per line of code, which I suspect is mostly down to this level of scrutiny.
Having seen and consulted at many proprietary software companies, I can assure you that few operate at a level of engineering competence that would lead them to produce better code than the free software model. The main exceptions are some of the telecoms companies and a few operating system companies (I'm guessing MS is a mixed bag, given the appalling programs it is capable of producing, and the buried Easter eggs in flagship products suggest inadequate controls).
Many (most?) of these problems in proprietary Windows code could be addressed by ticking one check box in the VC++ interface. It says a lot about how much people value security that they don't tick it; I'll let you ponder why it doesn't default to on.
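Presumably the check box meant here is the /GS "Buffer Security Check" option (GCC's rough equivalent is -fstack-protector); that's an assumption from context, since the post doesn't name it. The kind of bug it targets is the classic unchecked copy into a fixed stack buffer:

```c
/* Deliberately broken sketch, not production code: an unchecked strcpy()
 * into a fixed-size stack buffer. Built with a stack-cookie option
 * (assumed here to be /GS in Visual C++, -fstack-protector in GCC), the
 * overrun is detected at run time and the process aborts instead of
 * handing control to whatever overwrote the return address. */
#include <string.h>

static void greet(const char *name)
{
    char buf[16];
    strcpy(buf, name);   /* no length check: anything over 15 chars smashes the stack */
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        greet(argv[1]);  /* try an argument of a few hundred 'A' characters */
    return 0;
}
```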
February 14, 2005 at 2:30 am
The big free software projects, like the Linux kernel, have very few of these kinds of issues compared to proprietary code. The main reason is that any budding software engineer looking around for a big code base to run their latest validation tool on will find a big free software project and start testing. The Linux kernel is probably one of the most examined pieces of code on the planet; recent code quality audits show it has an exceptionally low defect rate per line of code, which I suspect is mostly down to this level of scrutiny.
One might question this. I'm on several security mailing lists and have seen quite a few kernel vulnerabilities recently. I think this is a problem of every piece of software becoming more and more attractive as a target. When Linux first started, there was no such thing as a Linux virus; now there are some. Anyway, be that as it may, no software is perfect. I guess there will always be bugs, security holes, and the like in one way or another.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 14, 2005 at 3:03 am
As you say, not bug free. The recent Stanford study identified fewer than 1,000 bugs using automated tools in 5.7 million lines of code, when between 100,000 and 200,000 would have been expected in typical code; that is, under 0.2 defects per thousand lines, where typical code would run around 20 to 35.
Unfortunately, vulnerability counts measure only how many bugs have been found, not how many remain, although you can try to model how many remain from how many have been found and when.
Similarly, the Stanford study finds a low bug count partly because the software had already been extensively checked for such problems using similar tools.
Viruses are a distinct issue, as the environment has to be right for them to spread. A system can have more vulnerabilities, but if proper engineering makes it hard for viruses to spread, it will have fewer viruses. Viruses on Linux are still not enough of an issue to need anything beyond generic fingerprinting products (à la Tripwire) to spot unauthorised alterations, and I doubt they ever will be.
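For readers unfamiliar with that class of tool, here is a minimal sketch of the fingerprinting idea (not Tripwire's actual implementation; the watch list and the non-cryptographic FNV-1a hash are just for illustration): hash each watched file, store the results as a baseline, and re-run the scan later so any unauthorised alteration shows up as a changed fingerprint.

```c
/* Fingerprinting sketch: print a hash for each watched file. Comparing a
 * later run against a stored baseline reveals altered files. Real tools
 * use cryptographic hashes and protect the baseline itself. */
#include <stdint.h>
#include <stdio.h>

static uint64_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;                             /* unreadable file: no fingerprint */

    uint64_t h = 0xcbf29ce484222325ULL;       /* FNV-1a 64-bit offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 0x100000001b3ULL;                /* FNV-1a 64-bit prime */
    }
    fclose(f);
    return h;
}

int main(void)
{
    /* Hypothetical watch list; adjust to taste. */
    const char *watched[] = { "/bin/login", "/usr/sbin/sshd" };

    for (size_t i = 0; i < sizeof watched / sizeof watched[0]; i++)
        printf("%016llx  %s\n",
               (unsigned long long)fnv1a_file(watched[i]), watched[i]);
    return 0;
}
```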