I'm not sure if this topic is substantive enough to be posted under Doug's new rules, but I'll try it.
Computer and hardware pioneer Don Lancaster (still going strong), in his blog on April 2, had an interesting take on the "Gresham's Law" effect that amateur internet posting is having on traditional publishing. From a scientist's perspective:
There's been a lot of debate about the quality of amateur sources of information, like Wikipedia. I think it is necessary to be careful about information quality in science. One of the reasons that refereed journals are so formal is that scientists have had centuries of experience with this problem. Left to their own devices, people think in a superstitious, magical fashion and make huge mistakes in logic, causal reasoning, and statistics. You have to apply discipline, sanity checks by other experts, and the tools of mathematics to get objective knowledge you can be confident in. It doesn't always work, but pseudo-science and amateurism almost never work.
There is also the issue of expertise. The physicist Richard Feynman frequently talked about how carelessly the word "expert" gets used. To understand a scientific field at the level of a master requires years of hard work, but people often seek to skip that annoying step and just become a respected "expert" instantly. So you get the cranks and pundits who flood the internet with poor-quality information. You get someone like Steven Levy pontificating about computer technology with a degree in literary criticism... all too typical in the journalism of science. The BBC's Bill Thompson has a master's degree in computer science, which is amazing. I often disagree with his political slant, but he has paid his dues and understands the science.
To see how difficult it can be to get a certain answer, just look at the serious question of Global Warming. Here is a problem that needs to be understood, but the issue has been politicized and muddled by big business on the one hand and leftist political opportunists on the other. How do we find out exactly what is happening, why it is happening and how confident people really are in various theories? It's virtually impossible, because even the peer review system has broken down due to strong political passions.
For folks who are interested in science, always try to go to the source. Something like space.com does a good job of monitoring news channels, but if a mission or discovery interests you, go to the source and try to find what the actual scientists said. Something else I have found, and have heard many other scientists say: read the masters. You'd be amazed at how good some of the great scientists are at communicating their ideas and understanding -- for example, The Feynman Lectures on Physics, or Einstein's original papers. And if you're really passionate about it, go to college and learn to do it!
Wikipedia is okay for when I want to find out a song title from Ozzy Osbourne's back catalogue. For anything else, it's not.
Peer review is just like anything else -- no system ever works perfectly. My own experience has been that, any time reviewers have asked me to make changes to a manuscript, I've always grumbled about it; but once the changes get implemented, it's often led to a big increase in the paper's quality.
Wikipedia contains many fine articles and much healthy brain-food, but like so many rich and tasty concoctions it really ought to carry a health warning. You know the sort, I'm sure: May Contain Nuts!
I find it an invaluable resource, but I tend not to be seeking out the more outre material, and my opinions of any such that pollute my mental landscape are unlikely to be altered. I do worry, however, that it can be the subject of not-too-subtle manipulation at the hands of over-enthusiastic contributors, whose basic approach is not so much error-prone (all resources are full of errors!) as downright fraudulent. Perhaps the 'green-ink' brigade should see their postings reproduced in, er, green ink as a warning to the innocent (though that may imply a degree of control which Wikipedia wouldn't easily embrace).
Bob Shaw
Peer review is a tried and trusted mechanism that has proven itself to be the best we've been able to come up with over the centuries, but it has its flaws. Wikipedia may be too ephemeral to be a suitable paradigm for the structured presentation of scientific ideas, but it does demonstrate that there are solutions possible now that could be used to improve on the peer review mechanism. For all its flaws it is remarkably resilient and self-correcting -- it is not something to be dismissed lightly, in my view.
Peer review has failed pretty badly on a number of occasions. These incidents do not invalidate peer review at all, but they do demonstrate some of the extreme weaknesses of the traditional journal model, many of which could be rectified using a wiki model (say, a wiki maintained by proven experts rather than Andrew Orlowski's proverbial spotty teenager).
Here are three quick examples covering extremely important research in very high-profile journals, where the process would presumably be at its best.
http://news.bbc.co.uk/2/hi/asia-pacific/4554422.stm - the faking of human cloning data in Science, May 2005.
http://www.drproctor.com/os/latimesschon.htm - both Science and Nature were deceived by Jan Hendrik Schön.
http://www.economist.com/science/displaystory.cfm?story_id=E1_NSNQNNG - a statistical analysis of data in 32 Nature and 12 BMJ papers from 2001 showed that data had been falsified.
And one less serious but very significant example using, appropriately enough, the Wikipedia article on http://en.wikipedia.org/wiki/Sokal_affair
I know that we could play a game of message-board ping-pong, countering these with bad Wikipedia examples, and those examples would be valid. But it should be possible to combine the best aspects of traditional peer review (trusted, proven experts as the reviewers) with the wiki model (rapid release of data, plus active and healthy commenting/footnoting/reporting of corroborating or contradicting research, and validation of test data and statistics).
It's hard for any system to deal with that kind of dishonesty, but they were caught -- and probably not by a random non-expert on the internet.
People who try to go outside the formal structure of science like this are turning the clock back a few hundred years. They will encounter all the same problems that academics encountered, as they tried to develop quality controls for objective discourse.
Hi all, interesting discussion.
My experience of peer review is poor, as I have published only one scientific paper, and it was on a rather politically charged subject (namely, that in an economic system where the agents behave in a somewhat altruistic way, the distribution of wealth can become much more egalitarian than with 100% egocentric agents). I submitted my paper to two referees... One simply mocked me. The other asked me to remove the "philosophy" (which I did) before accepting it. But he let a serious mistake in a formula pass, which fortunately the first referee pointed out to me.
So I am not convinced that the peer-review system is always implemented correctly.
However, I can only back what was said in several posts (Don, helvick, Bob, etc.): the peer-review system results from centuries of experience, and it is the best known, despite its imperfections and defects. And even if a false paper can fool peer review, a false theory cannot fool the whole system for long.
Regarding deliberate fraud, serious difficulties sometimes arise. Consider some examples:
-The Gupta affair. This man distributed hundreds of fossils while falsifying their place of origin, leading to false conclusions in dozens of otherwise honest studies. It took years for his work to be purged, as he held a position of power in his country's scientific establishment: any compatriot who denounced him was fired, and any foreigner who denounced him was dismissed as hostile to his country. The affair had to be resolved at the political/governmental level.
-The Ragnar Rylander affair. This professor of hygiene completely invented studies showing that passive smoking was not dangerous. But he had been paid by Philip Morris for thirty years! In this example a vested industrial interest acted frankly outside the law and far beyond any morals. The affair was brought to court and those responsible were condemned, thanks to the relentlessness of anti-tobacco activists.
-Sir Cyril Burt, an early president of Mensa (a club for people with high IQs) and a propagandist for the genetic transmission of intelligence. After his death, his falsifications were discovered (concerning the IQs of identical twins raised apart). Cases of this sort lead today's geneticists to be very politically careful about finding genes that would imply a difference between the races. No such gene has been found to date (officially), but what if one is found?
What is interesting to note is that, even in the very difficult case of the Gupta affair, science won!!
If an error slips into a peer-reviewed article, it's there in print, so subsequent papers can point at it and correct the error.
I guess the editing history in a wiki is similar, although a wiki presents only one 'truth', i.e. the final article.
Just to throw a couple more points in:
(1) I probably came down too hard on Wikipedia in my post a few days back. As Bob has pointed out, there's a lot of good stuff there, and Wikipedia is fantastic for "infotainment". But I would never, never, NEVER cite a Wikipedia article in a paper -- partly because of the indeterminate authorship, but also because the pages change over time which makes citation impossible by its very nature (unless you want to cite the version of a page from a specific date, which the reader would then have to look up).
It might, however, be okay to use the references cited in a Wikipedia article as a starting point.
(2) One of the most infuriating things about the current review process is that reviewers sometimes make no discernible attempt to understand the paper they are reviewing. The experience Richard has described above is hardly uncommon. For example, I have had a paper rejected because of a two-sentence review that dismissed the paper with pretty much no explanation of why the reviewer didn't like it. Hey, thanks for the constructive criticism, buddy!
My feeling is that this speaks to a really major drawback of having reviewers be anonymous. Anonymous reviewers don't get credit for the service they have done to the academic community, but their careers *do* advance when they publish. Everyone wants to publish ten papers a year, but nobody wants to review twenty, and the result is that editors can't find people who are willing to read what others have written. Add to this the rush-rush-rush of "publish or perish", and the inevitable result is a *major* drop in quality.
I have had to send in two "Comments on..." papers to IEEE journals in the past couple of years to correct errors -- in peer-reviewed papers -- that could, in some cases, have been caught by a first-year calculus student. (That's not an exaggeration; check out my discussion in IEEE Transactions on Power Delivery from last year.)
If published papers had the names of the authors AND the reviewers on the title page, then maybe said reviewers wouldn't be so quick to pass judgment on a paper they had hardly looked at. The key would be to include only the names of reviewers who had recommended publication. That way, if the paper got accepted and turned out to be crap, some of the egg would end up on the faces of reviewers who had failed to catch the errors. On the other hand, if the paper turned out to be a classic, only those who recommended publication would get bragging rights about how their suggestions had helped make it that way. (Also, leaving out reviewers who recommended rejection would help to avoid the "revenge" factor -- which is the main reason for having anonymity right now.)
This approach is nice to think about, but it will never be implemented... because it would expose all the "old boys' clubs".
Sorry for the length of this diatribe. It's actually short compared to some of the extended rants on the same topic I've embarked upon whilst at the U of Western Ontario Grad Pub. [Minor edits for various things.]
Rob, putting the names of the reviewers on the first page of a paper, along with a short comment from them -- what a good idea. I back it.
By the way, if you look at http://ieeexplore.ieee.org/xpl/tocresult.jsp?isnumber=8549 you will find my name too. I was surprised, as I did not write the paper myself. But I did build the electronics for the experiment, so it was kind of them to name the technicians as co-authors of the study.
I completely agree with you about the emphasis placed on IQ, Don. The classical IQ test is a test of pattern recognition capability, in my opinion. Since I've always been good at pattern recognition, I usually score between 140 and 160 on IQ tests, depending on how they're organized.
But there are a lot of things that people I know, who have far lower IQ scores, can do that I'm just not good at, at all. I'm good at manipulating systems, but I don't have any artistic talent, that I know of. And while I can pronounce foreign languages quite competently, I have a very hard time thinking in them, and thus only ever get so far (the constant-mental-translation level) when I try to learn them.
I am a natural mimic, and a fair dialectician... maybe that qualifies as an artistic talent...
I think that pattern recognition is a talent, that some people have and others don't. It comes in handy when trying to figure out complex problems, in our real lives as well as in our distractions. But it's only one of the talents that come in handy -- you can be a very generous person, or a very shrewd one, or a very charismatic one, and have great success without what would commonly be considered "high intelligence" (i.e., well-developed pattern recognition skills).
And, like my father always told me, it takes all kinds...
-the other Doug
Interesting remarks, DonPMitchell and dvandorn.
I would just add that, besides Mensa people, I have also had to deal with ordinary people, uneducated people, and even people with somewhat low IQs. And I clearly prefer persons who have little intelligence but who use it to be nice people, or to understand life and happiness. Those people are far more useful to society than people who use a large intelligence to build weapons, or -- more subtly but more dangerously -- to muddle simple questions (happiness, the meaning of life, climate change...) and turn them into complicated debates requiring constant sanity checks and the debunking of many specious arguments.
Back on topic: it is true that modern means of communication, and especially the Internet, have made it much easier to express ideas, especially new or leading-edge ideas, in every domain (science, unknown phenomena, philosophy, society, politics, the environment, spirituality...). We have many more signals than, say, in the 17th century, and much better-quality signals. But the terrible drawback is a much lower signal-to-noise ratio, as now everybody is able to publish any personal thought, theory, rant or nutty idea. We see this in the domain of this forum: there are many interesting discussions, but many more nutters, kooks, Hoaglandites, guys who think we never went to the Moon, etc. And it is the same in EVERY domain! For instance, in spirituality we have sects, fundamentalists, nutters, etc., some being really nasty.
So, in my opinion, the only solution is something which works more or less like a peer-review system, formal or informal:
-formal peer review in reference scientific journals
-informal recognition of "what is best in the domain" for informal work, personal sites, books, etc.
I would like to show what I did myself at http://www.shedrupling.org/resource/base.php?lang=fr (dealing with subjects like the environment, spirituality, unexplained phenomena, happiness...):
-show a selection (often a narrow selection) of the best sources, sites, books... on a given subject
-briefly debunk common misunderstandings, or provide warnings when an otherwise interesting resource raises a problem
-organize subjects by relation
-not visible, but important: I have several times removed links to inaccurate or biased sources
This is a brief example of how, together, we can organize an informal but accurate network of reliable knowledge. If everyone serious does the same, link pages of this kind will constitute a small but efficient network connecting knowledgeable, serious information sources -- a bit like Wikipedia.
Perhaps Wikipedia will become some kind of standard, but that will depend on how they deal with controversies: who wins and is published on the wiki, and who works in the background to expel some contributors and retain others. I have not examined it in detail, but in some instances they do not dodge controversies (we cannot), instead presenting the various opinions. But there are controversies and controversies. "Is there life on Mars?" is a question nobody can answer today, so the various arguments must be given. But a question like "Were the Apollo missions faked?" is not even worth mentioning (unless we specialize in denouncing false science and manipulation).