
More on Scientific Reports, And on Faked Papers

After seeing that horrible junk paper in Scientific Reports, and after some correspondence with people who’ve submitted to the journal and reviewed papers for it, there’s a question that I think needs to be asked. Has anyone ever had a paper rejected from this journal? They’re supposed to review for accuracy, not impact, but if that paper under discussion got through, then anything can get through. Let’s set the bar lower, then: has anyone ever even heard of any substantial revisions during its editorial process? The number of people saying that they’ve tried to review papers for SR only to find the paper published basically as it first came in makes a person wonder. At least they do seem to be sending papers out for review, although you’d certainly wonder who “reviewed” that last one.

Reader experiences with Scientific Reports are solicited in the comments, and comparisons of them with other open-access publishers such as PLOS ONE and Science Advances are welcome as well. I would like for there to be good open-access journals out there, but the Nature Publishing Group may have a real problem when it comes to this title. Put simply, if your editorial process lets papers like the one linked above through, then your editorial process is broken.

I certainly don’t want to reserve scorn just for the folks who published this one, though. At least PeerJ has put up an “expression of concern” on the paper from this group that came out with them in March. (All that the Retraction Watch folks were able to get from Scientific Reports was a statement that they didn’t comment on this sort of thing). But let’s not forget the “researchers” who wrote this stuff up. The publishers may be going only as far as saying they’re concerned, and I don’t think that Retraction Watch wants to use the word “fake” as long as the original authors are denying any misconduct. But I will.

Dr. Nima Samie of the University of Malaya: these papers, that you are a primary author on, are fakes. They are full of blatantly duplicated and manipulated images, which cannot by any stretch of the imagination show what they are purporting to show. My children started laughing when I showed them your papers’ figures – they wouldn’t be able to get away with pictures like these in front of their high school teachers. What’s more, Dr. Samie, your explanations as given here for why the cells in some of the images look like exact duplicates of each other are ridiculous. You need to realize these points, and what’s more, you need to realize that others realize them, too. You have some serious explaining to do for foisting these things off on the scientific community.

The editorial staffs of the journals who published these have plenty to explain as well. And there’s no reason to delay; let’s get right down to it. If the journals and the scientists involved are interested in clearing their good names, now’s the time. “No comment” (or just plain silence) is not an acceptable answer when things like this appear.

63 comments on “More on Scientific Reports, And on Faked Papers”

  1. John Thompson says:

    I have submitted one paper to Scientific Reports. It was reviewed by 2 capable peers who read it carefully and made useful, constructive comments. We made significant revisions and the paper was eventually published. The review component of the process was as it should be: good reviewers making useful comments. Unfortunately, the process deteriorated after that as the editor(s) proceeded to make a series of very picky requests for corrections that were made in a serial manner. One editorial/formatting request followed by a quick, simple change. Wait a week. Make another editorial/formatting request followed by a quick, simple change. Wait a week. Repeat until you are just about to withdraw the paper and submit elsewhere. So, my experience is different than you have reported but also frustrating. I have published in multiple other open access journals with review and publication results similar to paywalled journals: some good reviewers and some bad, some decisions I agree with and some I don’t. When the editors of any journal are doing a good job (focused on science), the results are good and can overcome bad reviewers (who will always be there). It seems like the editors at SciRep are more focused on the meaningless than the quality of what is published.

    PS You didn’t tell us there was math involved in posting a comment!

    1. Mike Snuk says:

      Had a similar experience – decent peer review and academic handling, but the editorial production team kept getting on our nerves requesting senseless formatting changes even before the manuscript got sent out for review.

      However, similar things happened to me with Elsevier journals. For our latest paper there they managed to botch the proofs twice, in spite of very diligent usage of their online proofing platform.

      Maybe there’s a shortage of people skilled in the technical aspects of publishing?

      1. Anonymous says:

        As a journal editor, I can offer some insight into the editorial and technical decisions involved in publishing. Most formatting revisions are made either to adjust the paper to match the journal’s style, or because the submission was not compatible with the typesetting or printing process (for example, low-resolution figures, legibility problems, or file conversion issues *cough ChemDraw cough*). If the proofs are botched, that’s usually an issue with the typesetters, not the editors. The editors give the typesetters instructions, but most of the time typesetting is outsourced to nonscientists. If you see something surprising in the proofs, chances are the editors are also surprised and want to fix it as well.
        As for some of these catch-all journals, there are definitely some that take their role seriously. I can’t speak to how this particularly egregious paper was reviewed, but I can envision several scenarios in which it could have slipped through.

    2. Nezar Al-hebshi says:

      I am having the same experience currently!

  2. Jeff G says:

    I have published papers in PLoS ONE, and the review process is about as rigorous as I’ve seen in other journals.

  3. luysii says:

    Along the lines of junk papers, here’s another angle on the 39% reproducibility rate of 100 papers submitted to ‘top psychology’ journals. It’s good that the impetus for reproducibility comes from psychologists themselves. https://luysii.wordpress.com/2016/06/12/reproducibility-and-its-discontents-2/

  4. Peter Kenny says:

    With apologies to Voltaire, “…il est bon d’humilier de temps en temps un éditeur pour encourager les autres” (“…it is good to humble an editor from time to time, to encourage the others”)

    1. Descartes says:

      So, we must give thanks to Drug Discovery Today for helping to spread rigor-free crap, otherwise known as PFI.

      1. Peter Kenny says:

        Drug Discovery Pravda?

  5. Ed says:

    As an academic editor of PLosOne. I reject about 50% papers based on reviewers’ comments.

    1. Ted says:

      As a viewer of academic editor comments, I won’t reject your statement, but I do request that you revise it for grammatical clarity.

      :^)

      -t

  6. Alissa says:

    I am middle author on a paper currently being revised for Scientific Reports. We received two reviews. One was almost a non-review and gave no real solid suggestions for improvements. The second was very thorough and the editor suggested major revisions which could only be based on this second review.

    1. Curious observer says:

      I reviewed once for SR. The work was OK, but the initial presentation clearly had lots of gaps that needed to be plugged. Including myself there were three reviewers. The other two gave quarter- and half-page reports that were too generic and contained no specific comments on the problematic sections or on how best to address them. I have little doubt that the unpolished version would have survived the initial round of review had I not submitted a request for major revision.

  7. SP says:

    I have reviewed about half a dozen papers for SRep and have rejected two.

    1. a says:

      Were the rejected papers published?

  8. Here are all the reviews I have received in the last few years, most in PeerJ and PLoS ONE, so you can judge for yourself: http://proteinsandwavefunctions.blogspot.dk/search/label/reviews

    I have been rejected twice by PLoS ONE.

    1. hn says:

      We had a manuscript rejected by SR (for lack of impact, grrr!) and have another currently in revision after receiving useful comments. SR and other reputable OA journals publish decent work in my fields. Quality of review and editing is variable.

  9. elisa fadda says:

    I have reviewed 2 papers for Scientific Reports (SR) so far, rejected 1 and suggested major revisions for the 2nd. Both manuscripts were pretty weak. The final decision, communicated to me by the respective editors shortly after I submitted my review, was to reject paper 1 and major revision for the 2nd. Final decision on both manuscripts was based on 2 overall consistent reviews. I have never submitted to SR and based on this disgraceful event, I may not for a long while….

  10. Christophe Verlinde says:

    I refereed 2 papers for Sci Reports. Both were garbage and I recommended rejecting them, which the editor did. Later, one of the manuscripts appeared in print in RSC Advances.

    1. Dennis says:

      There is probably a need for a review database that publishers could share, if they are interested. But then, are they interested?

  11. PJ says:

    I’ve been thinking a lot about this lately, and hope to hear some input from my fellow industry scientists.

    Much of the biology we do in Pharma is “fit-for-purpose,” and even the highest-caliber work can stop abruptly when a project hits a no-go decision. These data packages often fall short of academic requirements for a complete, publishable “story,” leaving potentially useful data to collect dust on a long-forgotten SharePoint server for lack of time and resources. To be clear, I’m not talking about “least publishable units” and “salami publications” like those from academia, though these certainly happen when industry scientists need to submit work before a layoff (sound familiar?). I’m mainly talking about data from preclinical studies in cells or animals that might add to the scientific literature in unexpected ways, but are scorned by editors for their lack of completeness.

    How do folks here get their good quality, “incomplete” work out there? Do you just give up after a poster presentation, or plow ahead until you find a compassionate editor at a peer-reviewed journal?

  12. Chrispy says:

    What happens when it is not possible to blind a clinical trial, like if a drug has observable side effects? (I understand the EGFR inhibitors cause rashes, for example.)

    If I was a cynical huckster could I just take a benign compound with a noticeable effect — something like niacin, that causes flushing — and push it forward in a placebo effect-troubled category like depression?

    1. dave w says:

      I guess you might need to find an otherwise-inert substance that has the side effects (but not the expected clinical activity) of the compound under test, and use that as the “control dose”…?

    2. NJBiologist says:

      Actually, diphenhydramine has been used in the placebo arm of antidepressant trials (especially when tricyclics were the norm, with their prominent anticholinergic/antihistamine effects)….

      1. BTC says:

        Huh, that’s good to know. I guess I should disclose my diphenhydramine allergy (ironic, I know) if I’m ever to volunteer for a pharma trial.

  13. Anon says:

    I’ve published once in PLOS One and reviewed one ms for the same journal. During publication of our paper the review process seemed fairly typical and stringent with the proviso that papers were reviewed for accuracy not impact.
    The ms I reviewed was sub-par and I recommended major revisions. The authors refused to do anything meaningful so I recommended rejection which the editor agreed with. It hasn’t appeared so I’m presuming the journal accepted that conclusion.

  14. DT Man says:

    I recently published what I felt was some of our stronger work in Scientific Reports, which seemed like a good home for it because there seems to be a lack of mid-tier IF journals in our field.

    The reviews were in depth, although we felt one of the three reviewers didn’t really get it (nothing new here). We went through 2 revisions and published the improved paper, although we did run into a few holdups due to minor formatting issues. I’ve had similar positive experiences with PLoS One.

    It’s very disappointing to see papers like Samie’s get through; there really is no excuse for the reviewers and editors on that one, and it brings down the currency of the journal for everyone else. Editors need to factor that into their decisions to take the cash for publishing a paper.

  15. Denis says:

    Once I submitted a paper to a peer-reviewed ob/gyn journal, and they told me they were not going to accept it because I stated that vitamin D is a hormone. We had a long and unpleasant talk, but in the end they surrendered to Harrison’s Endocrinology.

    1. R.Pal says:

      I had a paper rejected by a journal because they did not know that vitamin D has a role in TB. If an editor or reviewer does not know the role of vitamin D in various diseases, they need to quit science and do something else.

  16. Ian says:

    I recently reviewed a paper for Scientific Reports that was subsequently rejected on the basis of my review. The synthesis was poor, the compounds were frequent hitters, and the biology was unsurprisingly weak.

  17. Bryan says:

    eLife is probably among the best open-access journals. In my experience with the journal (n=1), reviews are fairly rigorous but pretty fair (I suspect that having the reviewers confer with each other to draft a single review keeps reviewer #3 from suggesting too many difficult experiments). More importantly, the review process is transparent as the reviewer comments and author response are published alongside the article. eLife probably benefits from its connections to various important funding agencies for funding and prestige, and they probably don’t have as big problems finding qualified reviewers as some of the lower-tier open access journals.

  18. Mr Docs says:

    I have published two manuscripts with SciReps. One manuscript was assigned to three different reviewers, who carefully read our work and provided very good feedback for improving the manuscript. The second accepted manuscript went through two cycles of revision before formal acceptance. On both occasions we had to perform additional experiments and extensive rewriting of the manuscripts to comply with reviewer requests. I have also peer reviewed for SciReps, and in three instances I rejected manuscripts, which were subsequently rejected by the editors based on my (and the other reviewers’) comments.

  19. Jason says:

    Anyone can become a reviewer for SR, so the reviews, if there are any, tend to be poor ones from unqualified reviewers. The IF is much higher than the impact/quality of the papers would suggest, due to self-citations from the authors.

  20. Albert says:

    Wow. This year, through July, SR has already accepted more than 12,000 articles… at this impressive speed they are going to cross 20k by the end of the year 😀

  21. Vik says:

    We just got a medical research paper accepted in Scientific Reports after a rigorous peer review process. Our group has published in several journals, and this peer review process was on par with, if not more stringent than, the others. We had to perform additional experiments and substantially improve the manuscript before acceptance.
    I guess there could be some rotten apples in the bunch, but most papers accepted in Scientific Reports are of good quality.

  22. Allen Pirvasti says:

    In my opinion, when you are studying as a PhD student at a university, it is absolutely the direct responsibility of the supervisors to check every angle of the work, and here that responsibility has been truly forgotten. The student may be guilty, but the supervisors’ responsibility is undeniable. I agree he made a mistake, but he is not yet a PhD and is working under the supervision of his professors. So I think that screaming that the first author is guilty will solve nothing, but it will ruin his future life and career.

  23. Sherif says:

    I have noticed that SciRep physics editors will let purely computational physics papers (DFT in particular) slip through as long as they are thorough and deliver “interesting numbers”, such as “large spin-filtering efficiency”, “large rectification”, “enhanced adsorption energies”, etc. Being thorough is a good thing, but it is easy to be thorough in DFT and deliver loads of nonsense. Also, if one plays with the available DFT codes enough, there is a large number of potential “interesting numbers” that can be generated while lacking scientific relevance. I think SciRep needs to filter pure-theory papers by picking only those that introduce new methods or address serious scientific questions. One thing the editors could do is include a chemist/materials scientist/nanotechnologist/etc. in the list of referees for pure-theory physics papers. I would love to see their IF match the quality of their papers. Well, forget about IF; it is starting to make less sense anyway.

  24. Neuroanonym says:

    I have published one paper in Scientific Reports, which received thorough and very careful reviews (one of them by a major scholar in the field; he signed his review), leading to substantial additional data analysis and re-writing. I have reviewed one paper for Scientific Reports, which I rejected (and it got rejected by the editor). The editorial process was slow, but overall, I have had a good experience, and I respect the journal for neuroscience as much as any other mid-tier Elsevier scientific journal.

  25. the banana says:

    I reviewed several manuscripts for Sci Rep (4-5), and one was resubmitted three times without any evidence that the revisions had been considered, on the strength of the “don’t judge us on the novelty” uber-argument. I still kicked it back because it was not well prepared and was sloppily written. Let’s be honest: a JIF of around 5 makes it a high-impact journal, and it should stand for quality, top 30% at least… Either you have ethics or you don’t – I do…

    I think a lot also depends on what you expect from a journal. They invite experts based on their knowledge, not on their available time, and I’d guess most professors can’t spend hours or days on a paper to actually review it properly and check the references too (they have other things to do as well: grants, teaching, networking, conferences, sleeping, chilling, starting companies, etc.).

    I agree that I can’t see how some papers got through, but the question is who is at fault here:

    -authors? maybe – some should at least consider proofreading. Job ethics versus an unhappy PI/Prof…
    -publishers? probably yes – they get money from authors and readers but don’t pay reviewers, which may be a bad model for a quality reviewing system. I liked the review system at Frontiers in… – even though unpaid, it took 9 months from submission to acceptance, a hard cookie, but a just and 100% transparent system (you can see who the editor and all the reviewers were on the papers. Awesome! It would help to stop the PNAS and Nature buddy-bonus system ;-)…)
    -reviewers? Invite experts (PhD or above) and pay them – then maybe even professors would care about actually doing it properly (that’s basically how the reviewing system for many grants works, and look at those acceptance rates ;-)…)

    My two cents only…

    Cheers!

  26. SRE says:

    My paper was accepted after two rounds of major revision. Both reviews offered constructive suggestions and criticisms. I am also a reviewer there; one paper I reviewed was rejected and another is going through a second revision. No doubt the journal is good. If some scrap got through, the blame should rest only with the reviewers, not with the journal itself.

  27. SAM says:

    This journal makes a strong impression for novel studies. I think that after 2-3 years, this journal will be a huge idea-tank journal without the traditional obstacles.

  28. Eric says:

    I am currently publishing in Scientific Reports, and the reviewing process was fair and constructive enough; I really feel that the paper is stronger now. You can also find bullshit papers in Nature, Science, Cell, eLife, etc. In those journals it is nearly impossible to publish without a big name as corresponding author, which is contrary to the nature of science.

    1. FFZ says:

      I totally agree with you.

  29. Hiren Karathia says:

    I think your interpretation is totally incorrect. Just today I received the good news from Scientific Reports that my manuscript has been accepted. Now, about your so-called “Faked Papers” in Scientific Reports: I totally disagree with your points about SR. My manuscript was sent out for review and came back with major revisions requested, along with constructive comments. We addressed those and sent a new manuscript back. It came back again with a single reviewer’s minor concerns, which we resolved and returned. The third time, it came back with the same reviewer’s minor comment, and the editor sent it back to us to address. We addressed it, and the result is here today: at last, it got accepted. The total correspondence time was nine months. So I do think the SR editors genuinely take care to provide fair reviews.

    So, as advice: don’t generalize your view based on a couple of case studies.

  30. FFZ says:

    I am afraid that your comments are a bit biased. You asked for personal experiences: I have two papers published in Scientific Reports. Both were reviewed by capable reviewers who wanted major changes, and after we made them, the papers were published; the total time was around 3 months for the first one and 5 months for the second. You may have caught one paper that was “fake” (in which sense? I did not get this), but how many of those are in high-impact journals? Why does nobody talk about this? I was shocked to read, some time ago, that JCI published a paper in 2015 whose main message was exactly the same as that of a paper published somewhere else in 2009, of course without citing the original. Taking a closer look, the conclusions, albeit the same, were based on some dubious experiments lacking the right controls. So the question is: isn’t it always the same? If you know the editors, or the editors pick the “right” reviewers, your paper will eventually get published. How is the objectivity of the reviewers checked? It is not. The same set of experiments with the same results, coming from a well-known university and/or from researchers who have already published in high-IF journals, has a better chance of getting published in any journal than if it comes from people at rather unknown or small universities, or from people who have not published in big journals so far. And I could go on: if your country is China or Turkey or wherever, your chances are much lower than if you are from the US or Europe. Same for the topic: if your topic is not “sexy” enough and the putative readership is rather small, nobody cares about the accuracy or the novelty of your results.
    I guess you need to make a more careful analysis of the situation and pick the right statistical test before drawing your conclusion; you cannot state what you state based on these few cases. You will find such papers in established journals as well.
    As long as the peer review system does not change, the amount of crap, biased studies, biased reviewers, etc. will increase. Publish the names of the reviewers, and the system will change. Pay for the reviews, and the system will change. We are all working for free for big publishers, as academic editors and reviewers. At some point you do not have the time or the capacity to review the huge number of papers going around without a change in the system.

    1. Cleo says:

      Well said !!

  31. DSX says:

    1. I have reviewed 1 paper for Scientific Reports. The manuscript was rejected.
    2. I have published 1 manuscript in Scientific Reports. Our paper was assessed by 2 reviewers. Their comments were professional.

  32. ScieChris says:

    I am an editor myself and have also published in Scientific Reports, receiving adequate and rigorous reviews from experts in the field, who requested additional controls and experiments. As in all journals, I guess it depends on the scientific field, on the scientific ethics and stringency of the editors in any given field, and on the quality of the peer reviewers. In general, in my field, I think SR is at least as rigorous during the review process as any other mid-tier journal, if not more so. In some scientific fields more than others, bad sheep may slip through the fence; maybe fewer qualified reviewers are available in some fields? And pan-subject journals such as SR may have more fields with possible shortcomings than journals that are field-specific. Overall, I think SR is not to blame for the slip. The fact that it receives good numbers of citations for many articles and catches readers’ attention (= a high IF) seems not to be due to an overly high number of self-citations, and appears to indicate that the editors and the review process are overall quite adequate.

  33. AA says:

    The “open access” world also gives bored, wicked people the nerve and the opportunity to spread gossip and offensive criticism.
    A quick response to a submission is not a sign of a lousy job; it can be considered fair to the scientists.
    I prefer it to the procedure of the BMC journals, which keep an article for 4 months until they send the reviews, and then decide for you that, since they do not believe your revision will be done in one month, it is rejected. On the other hand, when the revised paper is accepted, they hold you in the air and forget to tell you for weeks. And when you ask the editorial office what is going on, the response is “Oh yes, the paper is accepted.” And this happens at every step until publication. The author has to keep reminding them and wake up the system.

  34. sim wer says:

    Pay-to-publish is a big, big, big tumor on the academic community. SciRep and many other OA journals put profit far, far ahead of scientific quality. With this quick growth, the scientific world needs an independent organization to restrain the profit-oriented OA journals; otherwise science will die along with the high-quality free journals. Not all researchers have the money or the willingness to pay to publish, and many are paying the money for purposes far removed from science, especially in countries with a bad evaluation system for researchers. I’d like to ask: how many real breakthroughs have come from paid-to-publish journals?

  35. RK says:

    I have submitted papers to both PLoS ONE and Scientific Reports. In both cases, the review process was quite rigorous, and the critiques of the paper we sent to PLoS ONE pointed out some serious errors, which we corrected in a subsequent draft sent out a year later. That second submission went to a traditional journal, where more revisions were requested, and finally the paper was published. In the case of Scientific Reports, revisions were requested (including new experiments), were duly completed by us, and the paper was accepted in the second round. The editorial process was also quite pleasant. Overall, I am quite satisfied with my experience at both of these journals. It is Frontiers I am worried about, as I regularly peer review there (I have not submitted anything there as of yet). I have noticed that the quality of the peer reviews from the other reviewers (initially blinded from me) is horrifically lackluster.

  36. Oz says:

    I have published in PLoS ONE. I received insightful reviews, and, to address those, I needed to conduct additional (robustness) analyses and revise the manuscript significantly.

  37. vijay says:

    Here is my experience with a Korean professor: he chose the editor (a professional friend) along with preferred reviewers, got favorable/sloppy comments in turn, and the paper got published. He publishes in high-impact-factor journals just for the sake of money, because he gets money from the university if he publishes in high-impact journals.

  38. Mel says:

    I have reviewed three bio papers for SciRep and rejected two; both were then rejected by the editors in the first round. Most people argue that because it’s a Nature journal it is more prestigious than PLoS ONE or other similar OA journals. I don’t really buy this. Publishing in SciRep is obviously limited to those who have the cash to pay. People buy themselves in, reinforce the impact factor game (while complaining about it at the same time), and hope to gain from associating their work with a Nature journal, although it is pretty obvious that this journal is Nature’s dumping ground for rejected papers. I suspect that Nature makes a pretty good profit out of it.

  39. Animesh Ray says:

    I have submitted two manuscripts to Sci Reports. One was accepted after substantial reviews and revision. One was rejected after substantial reviews. We reworked the rejected manuscript on the basis of the reviews, then submitted to Nucleic Acids Research. After another round of substantial reviews and revision, the paper was accepted in NAR. So in my experience, it was as difficult to publish in Sci Reports as in another strong journal.

  40. Anonymous says:

    I will not read Scientific Reports. Sure, some great research may be published there, but I cannot (nor do I have the time to) differentiate the good from all the mediocre, and from research that should NEVER be published.

  41. Null says:

    The merits of having your work published in OA journals such as PLOS One and Sci Reports will obviously differ across research fields. There are certain fields with a large number of well-received field-specific journals, and the OA mega-journals may be perceived as more of a “dumping ground” in those fields.

    But in the field of paleontology (which my work involves), having your work published in PLOS One, Sci Reports, or even PeerJ is received pretty well by the community. And I was actually surprised by the very small number of paleontology papers published in Sci Reports.

    1. JC says:

      As someone involved in paleontology, I agree.
      Although Scientific Reports may be considered a dumping ground for papers rejected from higher-tier Nature journals in other academic fields, it is actually quite hard to get a paleontology paper published there. I have actually had 2 submissions rejected from Scientific Reports.

  42. maria debellard says:

    I had a paper rejected based on one single spurious reviewer who did not even know the topic of cell migration… a very disappointing experience. The editor said that after almost two months they could only find one reviewer.
    But then I have seen such poor quality published there that it made me think the process is just random and unscientific. They just want your money.

  43. I published my research paper on a journal website, and after a few months I saw my published paper on a different website. It’s very disappointing for me.

  44. Sankalp says:

    I have reviewed papers for Sci. Rep. twice before, and both times I recommended rejection. My criticisms were in line with those in the very extensive reviews by the other two reviewers of each manuscript, and the papers were finally rejected by the journal. That said, I have noticed some clumsy papers in the journal.

  45. RMH says:

    Of course, it’s impossible to find someone saying “our low-quality work was luckily published thanks to SR’s poor review process.”

  46. Allen says:

    So it’s preferable to publish in old-boy journals like PNAS, where you can get turd papers in by wining and dining members of the academy? And LOL, Science is the most political journal ever! (Not sure what Science Translational Medicine is; a nod to lifestyle-drug pharma?) Or savior journals like that Nobel guy’s eLife, where his best buds, funded by the Pablo Escobar Medical Institutes, publish.
    The only real distinction of “respectable” journals is that more grant bling is featured… the number of miserable papers in the high-metric journals is far from insignificant!
