
The Merck Malware Attack

As everyone will have seen, the last few days have brought news of yet another “ransomware” attack, this time from a piece of malware known most often as Petya. One unit of the (huge) container shipping company Maersk is known to have been affected, as is a branch of the French bank BNP Paribas, but I bring this up here because another company affected is Merck.

There were a number of reports on the first day of this situation about Merck employees coming to work to find locked computer screens, etc., and it appears that the company basically told people to just go home. As well they might – I go back just far enough to remember the director of chemistry in my first job (fall 1989) being mistrustful of the idea of every chemist in the department having a computer at their desk, but buddy, you just try to work without one now.

Ransomware is very bad news. It takes advantage of the fact that (for a long time now) encryption has been a lot easier than decryption. There are zillions of ways to turn a message into what looks like a pile of random digits, but only one key that will restore order. Back during World War II, British and American codebreaking efforts had some famous successes against the German and Japanese military and diplomatic codes, but the days of successful direct mathematical attacks are basically gone. They were already on their way out during the war itself: many of the most valuable “breaks” against these codes occurred due to user errors and sloppy technique, and stealing a code book (or the equivalent) was still worth a great deal of time and effort in preference to trying a brute-force decrypt.

There was, in fact, an incident when an OSS team broke into the Japanese embassy in Lisbon in search of just such material, not realizing that the US Navy was already reading the Japanese “Purple” diplomatic cipher. Everyone in on the secret was terrified and furious that the OSS operation might cause Tokyo to switch its codebooks, which would have been a terrible blow. Not only did the US learn a great deal about Japanese plans by that route, but the Japanese ambassador to Berlin, Hiroshi Oshima, was the single most valuable source of info on the thoughts and plans of the Nazi hierarchy, via his long, extraordinarily detailed cables back to Tokyo. He died in 1975, never knowing that he had been the greatest unwitting spy of the war.

So much for the old days. Petya, unfortunately, seems to be using a perfectly good encryption algorithm, which these days means that you’re not going to decrypt anything. But it gets worse. Last night, two security firms announced that an analysis of the malware has convinced them that this isn’t a ransomware attack per se, because it appears that not even the people who are supposedly asking to be paid off will be able to furnish a decrypt key. Instead of generating some unique key based on the information it’s hiding, the software just trashes hard drive sectors and stores a random number. It’s designed to destroy data, not hold it hostage. The address that you’re supposed to use to pay off the malware writers doesn’t even work any more. This brings up a lot of very fraught questions about just who would have turned this software loose and why, but if anyone has any solid ideas about that, they’re not talking yet.
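
(For the non-IT readers, a toy illustration of the difference, using generic Python and the third-party cryptography package; this has nothing to do with Petya’s actual code. In the ransomware model a decryption key exists somewhere, at least in principle; in the wiper model it never does.)

    # Toy sketch only -- not Petya's mechanism. Requires "pip install cryptography".
    from cryptography.fernet import Fernet

    data = b"ELN entry: compound 1234, step 3 workup notes..."

    # Ransomware model: a key is generated and kept (or sent to the attacker),
    # so decryption remains possible for whoever holds that key.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(data)
    assert Fernet(key).decrypt(ciphertext) == data   # recoverable, given the key

    # Wiper model: the data is scrambled with a key that is never stored anywhere.
    # Once the key is gone, nobody -- attacker included -- can reverse it.
    throwaway_key = Fernet.generate_key()
    scrambled = Fernet(throwaway_key).encrypt(data)
    del throwaway_key   # key discarded; "scrambled" is now effectively random noise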

Ransomware is very bad news, but a deliberate wiper is the worst news possible. I am no expert myself, but it would appear that the hard drives that have gotten the full dose of this version of Petya are now full of information that may well be irretrievable. I very much hope I’m wrong about that, because, well, let’s get back to Merck. How bad is the situation there? The company’s most recent update was Wednesday morning, and it just said that they had indeed been hit, and that they believe that they’ve contained the problem and are working on recovery plans. Fair enough, but the earlier reports from actual Merck employees were quite alarming, and you’d have to think, given the recent news, that the company will have to figure out how much information is now lost. There is also, one assumes, a lot of very heated discussion about why the company’s Windows systems were not (yet) patched against this vulnerability.

The scope of the malware infection, the state of their backup infrastructure, the data at risk – all of these are known only to Merck, and perhaps not yet even to them. There’s a lot of really, really valuable information sitting on the internal servers of a drug company, much of it irreplaceable. How much of Merck’s is gone?

Update 1: Arsalan Arif at Endpoints says that they’ve been unable to get any comment from Merck today.

Update 2: I’ve heard from Merck themselves, with their latest release on this. They say that they “see no indication that the company’s data have been compromised”, which is very good news, and that “Government authorities working with us have confirmed that the malware responsible for the attack contained a unique combination of characteristics that enabled it to infect company systems despite installation of recent software patches.” That part is not such good news, and should be worrisome for IT departments everywhere until it becomes more clear just what systems were infected and what patches had been installed. It may even be worrisome after that.

58 comments on “The Merck Malware Attack”

  1. Philip Skinner says:

    I’m not sure if this is true at Merck, but there have been a few opinion pieces around recently about how healthcare providers are more at risk as their validated environments impede the application of patches, security updates and OS updates. A good case of the law of unintended consequences.

  2. Wavefunction says:

    Similar to the Japanese incident, the British also recovered an Enigma machine from a captured U-boat in 1941 that allowed them to break the immensely important naval code. Funnily enough, the Kriegsmarine did not suspect for a long time that the code had been broken, even though the convoys were clearly being rerouted.

    1. History Pedant says:

      Sorry to be a pedant but Bletchley Park had some machines before 1941. It was the German naval code books (captured by HMS Bulldog’s crew from U-110) that were crucial, as the German navy had a much more organised way of encoding its transmissions, unlike the other Enigma-using organisations like the Luftwaffe.

      1. cthulhu says:

        There was also a period of time when the Kriegsmarine Enigma machines had one more rotor than the other users of Enigma, and the Bletchley Park code breakers were SOL until one of the naval Enigmas was captured so that the bombes could be reworked.

    2. Rich Rostrom says:

      The Germans never realized that Enigma had been broken (aside from captured keys, which would only be good for a few weeks).

  3. Chrispy says:

    Scientific institutions are particularly at risk for attacks like this because they often run software that requires old operating systems. We’ve taken some of our HPLC equipment offline because automatic updates had a bad habit of crippling overnight runs.

    In the end, the blame really has to fall on Microsoft and the chumps like us that continue to rely upon them for operating systems.

    1. anoano says:

      Well, all OSes have security updates; if people don’t apply them, then the machines shouldn’t be connected to the internet.
      And people should be careful when opening files (a kind of Trojan horse delivered by someone careless).

    2. Anonymous says:

      What’s Microsoft supposed to do, force them to update at gunpoint?

      1. Vader says:

        The problem is that Microsoft *is* effectively forcing them to update at gunpoint, via update software that basically takes over the machine for a good stretch of time. Hence, the users disable updates.

        I went to Ubuntu years ago. It has frequent security updates, but they don’t take over the machine. In fact, they don’t get installed until you click the button, which I do at my convenience.

        There is also some evidence that Unix-like systems are inherently more secure than Windows. Perhaps this is because security was built into Unix from the very start (because it was always meant as a multiuser system) while security was an afterthought in Windows (because it was built on DOS, which was built for single-user systems not connected to the Internet).

        1. Bill says:

          ” In fact, they don’t get installed until you click the button, which I do at my convenience.”

          And that would be why Microsoft forces installs upon people, because they have a bad habit of not updating otherwise.

    3. fajensen says:

      Not that I like Windows, but if the IT operations team are competent people and they have a decent system management setup for the job, then it is absolutely possible to run a very tight ship with Windows too, at least from Windows 7 Pro and up.

      Windows versions below 7 Pro are better not exposed to any networking; it’s probably not a bad idea to have a separate VLAN for each such machine, given that Petya was looking for machines on the local network segment to “manage” using credentials grabbed from memory.

      Microsoft AppLocker will block most malware and adware from running at all, provided the operations management team have configured it correctly. Instead of allowing users to run as admin on their machines (because they will need to have odd applications on them and so on), they can instead be given virtual machines to be the admin of.

      https://docs.microsoft.com/en-us/windows/device-security/applocker/applocker-overview

      Usually the actual problem is that IT is seen as a cost center, something that is funded and staffed because one absolutely must, but with zero love, care, planning, career development or attention spent on it except when the shit hits the fan. Then one of course gets monkeys for staff, crap infrastructure, Dropbox and private mail everywhere, and after a while the staff know that the only way to get new kit is to have a disaster…

  4. Isidore says:

    This can be a serious problem with Windows machines connected to instruments, since versions of instrument control software do not keep up and are often incompatible with certain OS upgrades, not to mention older instruments that are controlled by PCs running previous Windows versions. Taking such machines off the network is the only solution, but this prevents data backups.

    1. Igor says:

      There are ways to lock down and isolate instrument controllers so that they can deliver data to appropriate network destinations while being almost completely shielded from the broader network. I was part of an effort to do so at a big pharma nearly fifteen years ago, and it amazes me how rarely this is done, especially at smaller organizations. Academic laboratories are particularly vulnerable.

      1. dstar says:

        That’s because this sort of thing is almost always done manually, and is thus excruciatingly painful.

        I’ve been working in automation for the last… wow, decade, more or less, simply because I can’t stand doing the same thing over and over again, and thus slowly drifted from administration into automation.

        Pro-tip: If [thing your IT department does] isn’t automated, _you’re doing it wrong_. The only exceptions are things where there is _literally_ nothing the same between each time you do it… and even then you can probably break it down to no more than half a dozen basic ‘things’ which just need two or three parameters.

    2. Lloyd T J Evans says:

      A standalone instrument-controlling computer can still be backed up locally, with an external hard drive for example. Or a small network attached storage device. This could be set up with a direct connection to the standalone computer for backing up data. Then just have a read-only connection between the NAS and the main computer network. That way, the network only has access to the data on the NAS, but can’t affect the operating system or data on the instrument computer in any way.
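
      Purely as an illustration (all paths made up), the push from the instrument PC to the NAS could be as simple as a scheduled script along these lines, which copies anything new or changed on the local data drive over to the share:

          # Hypothetical sketch: push new instrument data to a mounted NAS share.
          # Paths are placeholders; the share should be writable only from this PC.
          import shutil
          from pathlib import Path

          SRC = Path(r"C:\InstrumentData")           # local data folder on the instrument PC
          DST = Path(r"\\nas-box\instrument-share")  # NAS share reachable from this PC

          for f in SRC.rglob("*"):
              if f.is_file():
                  target = DST / f.relative_to(SRC)
                  # copy only files that are new or changed since the last run
                  if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                      target.parent.mkdir(parents=True, exist_ok=True)
                      shutil.copy2(f, target)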

  5. Not an IT guy says:

    Whether or not data is encrypted, deleted, or HD rendered inoperable…it sounds like HD replacement or reformatting of laptops and lab computers is required, a feat with 60k employees.

    1. NotAChemist says:

      “Whether or not data is encrypted, deleted, or HD rendered inoperable…it sounds like HD replacement or reformatting of laptops and lab computers is required, a feat with 60k employees.”

      It’s a pain to do, but with a good desktop architecture it is not a big deal. Insert a CD, reboot from it, rebuild the PC, remount the network drive that contains the user data (restored from a versioned backup) and you are good to go. Doing it thousands of times will require someone to sign off on overtime, but it is doable.

      The folks who stored data on C: instead of the networked, backed-up, personal drive? They lose.

      When you have tens of thousands of employees, recovering from trashed PCs is an everyday thing.

      The key items are storing all important data on network drives (the user’s job) and having versioned backups of them (the IT department’s job).

  6. metals wrangler says:

    Well, I just read a piece arguing that the end user is, and always has been, the weakest link. So far, AFAIK, my company hasn’t been hit.

    1. Shazbot says:

      That doesn’t seem to be the issue with this attack. The Register has it being spread through a compromised web service, with nothing related to the users involved in it. Certain specific precautions could have prevented its spread, but the issue does not seem to be the local users this once, but rather certain mistakes made with domain administration setups.

  7. cynical1 says:

    There was a Microsoft patch back in March – as part of their regular updates – that, as far as I know, protected PCs from both this new attack and the WannaCry ransomware attack. Both use the same software fault to gain access, from what I have read. Is this information wrong?

    Why is it that Merck can’t afford antivirus software if they are unwilling to do a Windows Update?

    1. Isidore says:

      Apparently it is. As per various IT people the Windows patch that protects PCs from WannaCry (obviously installed after the fact in that case) also protects from Petya. If this is indeed the case then the Merck IT people (and those at other affected/infected companies) should see greatly reduced bonuses this year.

      1. tangent says:

        The usual way it goes is that IT advocates for patching and other policies to keep machines safe and consistent (and under their control). The business side of things fights against this, because it interferes with their work (and their control of their systems).

        Usually the business people, who make money, will win this fight. The IT people in the trenches know exactly what risks are being run. At some point up the management chain, the urgency is lost — those techies are always griping about how things aren’t perfect. And it’s hard to get praise or visibility for the lack of a disaster.

        1. dstar says:

          Depends. I work for $wedofinancialstuff.

          Your desktop _will_ be patched within two weeks of the day the patch was released (assuming the security guys don’t blacklist the patch because it breaks everything, of course), or it will be forcibly patched.

          Your windows server _WILL_ be no more than one month behind, or Questions Will Be Asked. And, IIRC, those questions will be asked starting at, at a minimum, your manager’s level. And going up from there.

          The security guys can be a pain in the rear, and I say this as someone who understands that _that’s their job_.

          But telling them to go jump in a lake requires that people at the VP level, at a minimum, put their job on the line, if things go bad enough.

          There’s a _reason_ I like where I work.

      2. Patrick Star says:

        Apparently this one mostly spread due to poorly configured / insecure internal networks.
        It did also exploit the same vulnerability as WannaCry when possible, but this was not the primary route of infection.

  8. Earl Boebert says:

    This is the consequence of a large percentage of IT management, from vendor to user, adopting moral hazard as a business model, which means a whole lot of innocent people and organizations get screwed.

    Attack is going to dominate defense for the foreseeable future. Patch and pray will never keep up with zero day exploits. Spear phishing attacks have put all your employees on the attack surface. One disgruntled employee who looks at a suspicious email and thinks “This’ll fix those jerks” while clicking on a link and you’re toast. State-sponsored and state-tolerated attack organizations will apply impressive resources magnified by the fact that attack is an adventure while defense is just a job.

    Wise organizations will recognize this event as a portent of greater storms to come, and will invest in remediation, disconnect everything possible from the internet, and fire all the special snowflakes of whatever grade who whine about how inconvenient this all is. Unwise organizations are at risk of being nuked.

    1. Anonymous says:

      Or they could invest in more frequent backups.

      1. Earl Boebert says:

        Which is generally considered a remediation step, something I should have made clear.

    2. Steven Aston says:

      This wasn’t even a zero day; Petya uses a leaked NSA exploit, ETERNALBLUE, which the NSA sat on for years and very well could have been leaning on Microsoft NOT to fix. Microsoft finally patched it recently, when the Shadow Brokers leaked the exploit a couple of months ago, but obviously a lot of folks don’t have the patch.

  9. cynical1 says:

    ..or you could do a Windows Update?

  10. JIA says:

    I think you are all being too hard on Merck. They may well have had up-to-date systems. Based on news reporting, it seems this latest attack could infect computers that had been properly patched.

    From today’s NYTimes: https://www.nytimes.com/2017/06/28/technology/ransomware-nsa-hacking-tools.html

    “The so-called ransomware that gained the most attention in the Ukraine attack is believed to have been a smoke screen for a deeper assault aimed at destroying victims’ computers entirely. And while WannaCry had a kill switch that was used to contain it, the attackers hitting Ukraine made sure there was no such mechanism. They also ensured that their code could infect computers that had received software patches intended to protect them.”

    1. Patrick says:

      Being hard on Merck is exactly the right thing to be here. The other means it used for spreading apart from the WannaCry vulnerability (ETERNALBLUE) was through common misconfigurations of networks. They might have patched every known vulnerability in existence but then didn’t follow basic security principles that have been well-known and established for literally decades.

  11. GhostofGoldblum says:

    To me the real question is not “how did so many computers get infected and lose data” but “why was the data only stored locally on one computer?” These days, standard backup procedure for important data should include the computer it is on, a Dropbox or other cloud storage solution, a network drive, and at least one external HDD that is unplugged when not in the process of backing up data (ideally two). It may seem excessive, but if your computer is compromised and infects your Dropbox and your company’s main server, you will suddenly be very glad you had freestanding unplugged HDDs.

    1. Phil says:

      Software engineer says +1 for this, excepting the local HDD thing – the malware will encrypt any local disk (without telling you, of course). Versioned backups are the only way to protect against Day 0/phishing/stuff that gets through leaky firewalls etc.
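
      To make “versioned” concrete, here’s a bare-bones sketch (names and paths invented for the example): each backup run lands in its own timestamped folder, so a later run that happens to copy ransomware-encrypted files cannot overwrite the earlier, still-good snapshots.

          # Minimal versioned-backup sketch (illustrative only; paths are placeholders).
          # Each run writes a fresh timestamped snapshot rather than overwriting the last.
          import shutil
          import time
          from pathlib import Path

          SOURCE = Path("/home/user/projects")        # data to protect
          BACKUP_ROOT = Path("/mnt/backup/versions")  # versioned backup area

          snapshot = BACKUP_ROOT / time.strftime("%Y-%m-%d_%H%M%S")
          shutil.copytree(SOURCE, snapshot)

          # keep the last 30 snapshots; prune anything older
          snapshots = sorted(p for p in BACKUP_ROOT.iterdir() if p.is_dir())
          for old in snapshots[:-30]:
              shutil.rmtree(old)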

  12. JB says:

    Major institutions should completely ban the use of thumb drives. There should also be licenses required to use the internet; we require a basic level of knowledge to operate a car in order to prevent catastrophic injuries and loss, yet we don’t require the same type of knowledge for using the internet, when all it takes is some idiot clicking on a phishing link that says they have $1 million in Nigeria waiting for them to completely cripple a large company, causing hundreds of millions of dollars in damage.

    Also, why are terminals with extremely valuable data even connected to the internet? If something needs a software update/patch, let IT take care of it. Limit the number of terminals that have direct access to the internet. Other computers can be connected to each other for email over a local intranet.

    1. anonymous this time I think... says:

      Who would administer this internet license scheme? The US government? And how could it work when there are over two billion smartphone users worldwide? What should happen is better institutional IT policy, induction and training, but different organisations place different amounts of emphasis on this.
      As for thumb drives… King’s College London last year had a meltdown of their centralised storage and backup systems, and only people who had unofficial backups escaped unscathed. There is an argument for defence in depth. My own department at a different place is currently experiencing a (non-malware) system hiccough, and I know at least one person who is able to work because they had their data on a USB stick. You do need to pay attention to data security with this kind of thing, but there are such things as encrypted USB drives, for example IronKey.
      The NotPetya attack seems to have some interesting features, for those following it on IT sites like The Register. While it does use some of the NSA suite of exploits, it adds some new ones, and was apparently distributed *through an automatic update* to some accounting software. So Merck may well have applied recent patches and still got hit; and if they had, then they weren’t running with a 100% best-practice administrative setup either. Note too that locking down all the organisation’s computers doesn’t mean they were widely affected. A fortnight ago University College London was hit by a malware attack (again, through a newer attack vector); lots of IT services were shut down to contain the damage, and in the end only 12 users were affected, out of tens of thousands. Fast action, while dramatic, is useful to reduce overall impact.

  13. David Stone says:

    Just to add an update on the malware itself: latest word from security companies is that it appears not to be ransomware at all. The code contains no means of decrypting the data it encrypts (which is the critical section of the disk outlining where all the files are). So, even if a company paid the ransom (and the sole email address for requesting the unlock key no longer works), they wouldn’t get the data back. Unless a company had backups that were not themselves similarly encrypted by the malware, the data is gone.

    1. metals wrangler says:

      Yep…it’s not true ransomware, but “brickware”, as in your machine becomes a brick

  14. Max says:

    IIUC, the malware currently going around is not Petya, but is known as “NotPetya”. Unlike Petya, it seems that the authors’ intent was not to collect ransom (since there is no anonymous payment method).

    1. Handles says:

      Indeed, I have heard it called “Pnyetya” 🙂

      1. Pennpenn says:

        I have to wonder if the ‘P’ is silent?

        1. Cymantrene says:

          Like in Psmith?

  15. dearieme says:

    There’s an easy test of how seriously companies treat IT problems. How many Fortune 500 companies have a geek on the Board? Geeks may (usually) be the most infuriating of people, but to trust a company’s fortunes to computer systems and yet to fail to organise the company accordingly suggests to me that the executives have been nitwits.

    1. Olandese Volante says:

      The reason Fortune 500 companies tend not to have geeks on the board is that the board of such companies is the last place a geek would want to be. Geeks tend to find people with MBAs *extremely* irritating. 😉

  16. Watson says:

    Taking instrument equipment off the network is not effective if people and policies allow contaminated drives to be attached (e.g. for transferring electronic versions of printouts). In many cases, at least in the environment I was in, a student would plug in an infected USB drive and it would derail everything for several days. Back in those days, the only way to fight it required installing security software and manually updating with the weekly security updates from – yes, you guessed it – a USB drive.

  17. Anonymous says:

    Lots of comments, so I’ll TRY to be brief.

    1. I would like the hackers to leave my banks’ computers alone but please cripple the computers of my creditors. 🙂 Thank you!

    2. When will the hackers hit crucial infrastructure such as utilities (e.g., electricity generation and distribution)? The vulnerabilities of the US backbone have been discussed MANY times (60 Minutes; PBS; NY Times, etc.) for MANY years.

    3. I can remember centralized networked computing. A bunch of VAXes or other mainframes were maintained 24/7 by professional IT staff. Programs were installed on the central computers. If something went wrong, IT pros fixed it. THAT was their full-time job. Users had DUMB TERMINALS (e.g., VT100, VT320, VT420, Tektronix, …) on their desks and would run the programs they needed FROM the mainframe. Everyone had 3 disk areas: main workspace (current, in-use or active files); backup space (for older projects and files with less frequent changes); archive (for huge files and stuff that doesn’t change much, e.g., database info; old project files).

    Backups (onto magnetic tape!) were almost continuous; something like every 15 minutes for the main (active) disk; every 4 hours for the backup disk; full backups of EVERYTHING every 24 hours. THE MOST WORK YOU WOULD LOSE IS 15 minutes from your currently active main workspace.

    4. Centralized computing like that has a LOT of advantages. The IT pros do the updates, debugging, maintenance and monitoring for suspicious activity. HOW MANY HOURS ARE WASTED BY INDIVIDUALS DOING MONTHLY or more than monthly UPDATES of their OS and programs? Even just 1 hour per month x 100 zillion personal computers = a lot of hours wasted.

    5. Cloud storage and cloud computing are not the same thing as the good old centralized computing. But if you use Google Docs and so on, the software and updates are maintained by Google. Store in the cloud and Google or Dropbox or whoever has plenty of backups. And they would, or should, have the latest virus protection.

    6. My biotech MBA Officers refused to buy backup hardware for our personal computers and instruments. Our servers had tape backup but users were not using the server, just their desktops! I had to buy my own backup drive for my desktop Mac. Our NMR had tape backup but no other instruments had backup. When we shut down, the MBAs walked off with the biggest loot. I guess they were planning ahead and didn’t want to diminish the resources before shutting down and splitting things up.

    7. Is Linux (or Unix) more resistant to these attacks? The Morris Worm incident goes back to 1988. That made the unix community more aware and active about security.

    8. If you haven’t read it lately, look for the “If Microsoft made cars” websites. I think that OSs SHOULD be regulated to deliver some sort of minimum standard of security.

    If you can build a car that can go 200 miles per hour but the brakes only work up to 120 mph, you won’t be allowed to sell it to the public. Software (and hardware) always wants to deliver something faster, niftier, more profitable (must replace old OS by buying new version) BUT THEY CAN’T DELIVER SECURITY and safety for the majority of drivers = users.

    (The same idea applies to my cell phones and other technology. They break a lot of stuff that works in order to give me a gimmicky thing that I do not need. And then they can’t support all the new stuff they advertise, such as reliable networks!)

    9. Back to Merck: Merck does have some OTC brands but I don’t think they sell any headache remedies. This attack might have been perpetrated by Bayer to get Merck to buy up a lot of Bayer aspirin while they try to get their systems back up and running securely.

    1. Patrick Star says:

      All the major OSes are currently roughly equally sucky.
      Linux might have an advantage in that it’s easier to slim down to the bare minimum needed, and to do things like custom hardening since you have the source, but plop in a standard Linux distribution and you’d be just as screwed if something like this were targeted at Linux instead of Windows.

    2. fajensen says:

      “””I think that OSs SHOULD be regulated to deliver some sort of minimum standard of security. “””
      I think that IT service provisioning and software development should be (a) regulated profession(s), requiring proof of competence, licensing and liability insurance.

      We require this from plumbers, I believe. Electricians too.

      Plumbing is a lot simpler, easier to understand and verify, with less scope for screw-ups, than, for example, writing a program to handle online credit card transactions. Despite the apparent simplicity of plumbing, so many screw-ups must have happened over the years that regulation was needed.

      With software, today any cowboy can just sit down and write code that will affect millions of people, for better or worse, no questions asked, no consequences applied. But said cowboy cannot install a water pipe or electrical outlet at his mum’s house, because of previous experiences with that way of working.

      It is time we made software development and IT proper professions, I think.

      1. Olandese Volante says:

        You clearly don’t have the faintest idea how software is written these days.
        Hint: mission-critical software, including operating systems and their components (aka “libraries”), is written by highly qualified professionals, and audited and tested by still more highly qualified professionals. However, software today is extremely complex and interacts in extremely complex ways with other software. There will always be bugs that escape detection, but these tend to be extremely subtle, and there will always be extremely convoluted corner cases where such bugs might manifest themselves.
        Certain application software might still be written by much less experienced programmers or even hobbyists (like myself) but unlike 20 years ago it is now nigh impossible for a badly written application to crash your machine, since all applications now run on top of system libraries that effectively isolate applications from vital system functions.

    3. Steven K says:

      As I was reading the article, I was thinking of ways to protect the data other than magnetic tape. Even with as little experience as I have (one year as an assistant in the modelling group of a med chem lab), I was thinking, at least on a UNIX system, you could create a cron job that only mounts a separate encrypted drive for backups then unmounts the filesystem once the backup is completed. You could also store system restore checkpoints on a separate drive. Since storage is relatively cheap now, you could even have two backups with one being on a hard drive you can just swap in for the infected one.
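
      For what it’s worth, a rough sketch of that idea (device names, key file and paths all invented; run from root’s crontab): unlock and mount the backup drive, copy the data across, then unmount and lock it again so the backup volume isn’t sitting there writable if something nasty gets onto the box.

          # Sketch of a cron-driven backup to an encrypted drive that is mounted only
          # while the backup runs. Device, mapper name, key file and paths are made up.
          # Example root crontab entry: 0 2 * * * /usr/bin/python3 /root/backup_job.py
          import subprocess

          DEVICE = "/dev/sdb1"               # dedicated backup disk (LUKS-formatted)
          MAPPER = "backupvol"               # name for the unlocked volume
          KEYFILE = "/root/backup.keyfile"   # key file readable only by root
          MOUNTPOINT = "/mnt/backup"
          SOURCE = "/data/"

          def run(cmd):
              subprocess.run(cmd, check=True)

          run(["cryptsetup", "open", DEVICE, MAPPER, "--key-file", KEYFILE])
          try:
              run(["mount", f"/dev/mapper/{MAPPER}", MOUNTPOINT])
              try:
                  run(["rsync", "-a", SOURCE, MOUNTPOINT])  # copy data onto the backup volume
              finally:
                  run(["umount", MOUNTPOINT])
          finally:
              run(["cryptsetup", "close", MAPPER])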

      Or even run the workstations on Qubes OS, which sandboxes applications and prevents malware from accessing as much of the computer. The working environment could be set to one based on Windows so standard users are more familiar with it.

  18. eyesoars says:

    A large number of the computers attacked (60%?) were in Ukraine, leading at least some to suspect that this is a Russian infrastructure attack on Ukraine.

    Anonymous: computing has moved on a long way since the heyday of the computing center. There are reasons for this. Some of them are even good reasons. In those days, networks were very limited, and the systems themselves were even more fragile than today’s computers, regardless of the professional IT staff on duty; they typically supported proprietary operating systems to which they didn’t have source code, pretty much like today if they’re running Windows, and their ability to fix actual security problems was similarly limited.

    Labs and universities are often particularly susceptible to attack; it’s not rare to see bits and pieces of lab h/w running DOS, OS/2, Windows 95, and other antiques with known vulnerabilities that have no patches or fixes.

  19. Joshua says:

    “There is also, one assumes, a lot of very heated discussion about why the company’s Windows systems were not (yet) patched against this vulnerability.”

    This attack, unfortunately, was not exclusively reliant on the EternalBlue exploit used by WannaCry, and patched some time back by Microsoft. Initial distribution of the malware seems not to have been using that exploit at all, and it used a very potent combination of techniques to spread within local networks rapidly.

    If my organisation can afford to keep offline backups, so can Merck – and I’m sure they have and do. However in IT as in all professions we always need to make tradeoffs between protection/security, user impact/productivity, and cost. Merck will undoubtedly have lost some data irretrievably. My suspicion is that most of that data will be due to users not following company procedure and bending or breaking rules, rather than Merck not following good practice in their backup systems.

  20. Cytirps says:

    The sad part is that Merck could have avoided all this trouble by just installing commercial anti-virus software.

    1. fajensen says:

      The sadder part is that Merck then exposes itself to having all of its files and documents tracked, and the contents eventually leaked, by rogue TLAs having lawful access to the antivirus software via PRISM (and whatever the Chinese and Russian versions are)…

  21. tangent says:

    Not that we get to pick, but I’d disagree that “a deliberate wiper is the worst news possible.”

    The nice(?) thing about ransomware is you can ransom your high-value data. The bad thing is then you’re giving the bad guys an injection of motivation and resources — which they can use to buy a new zero-day vulnerability and build another attack.

    Destructive worms have been around off and on for decades, but ransomware worms are clearly on the rise, in the age of Bitcoin. Case in point: this worm sounds like it was ransomware code with the ransom taken off, i.e. if it hadn’t started life as ransomware, it might not have been built at all.

    A pure destroyer is maybe somewhat worse for today, but not as bad for tomorrow. Maybe we’d agree on that and are looking at the flip sides.

    I’d just say also, ransoming your data is of very limited value in practice (even aside from cost and ethics). How do we know they didn’t modify it before encrypting? Just for kicks, or for another payment, or to put attack code into executables. You have to go over all your spreadsheets with a fine-tooth comb, somehow.

    I hope those affected have backups, but they’re having a terrible time even so. Poor folks.

  22. David Stone says:

    Cytirps: the destructive software at the heart of this current wave was NOT detected by existing AV software immediately, despite it sharing a lot of code with the WannaCry one that preceded it. Having up-to-date AV software isn’t a bad thing, but it’s in no sense any sort of guarantee that your computer is safe.

  23. Scott says:

    “Destructive worms have been around off and on for decades, but ransomware worms are clearly on the rise, in the age of Bitcoin.”

    What a lot of people don’t talk about is that Bitcoin (and every other cryptocurrency) is actually totally trackable. If you have the serial number of the bitcoin, you can track every single transaction that serial has EVER done, from the time it was ‘mined’ back in 2009 to today. Oh, and this trackability is part and parcel of the security/anticounterfeiting of cryptocurrencies. There is a way around it, but I expect it to get classed as money laundering and declared illegal (and I think it can still be worked around/through, with sufficient computational horsepower).

    So please, go ahead and use bitcoin or any other cryptocurrency for your ransomware. Just don’t be surprised when someone drops a hit squad of government goons on you.

    Sucks to be Merck, in this case, though. Hope nothing too critical got fried.

  24. G435S3D W4T34M3110N says:

    That’s why I always use incognito mode when I surf 4 chan all day when I pretend to do calculations

  25. Hap says:

    This, alas, seems relevant…

    https://twitter.com/kvanaren/status/885148257495724033

    With so many things that can go wrong in drug development, I wouldn’t have figured that Merck might be unhorsed by a software virus. I guess it’s a Phase I success for a bugmaker (or a military)?
