Tuesday, June 10, 2008

The Pool profiled in Chronicle of Higher Education

Andrea Foster writes in the 30 May 2008 issue:

Re:Poste, a Web application that encourages academics to pick apart online articles from the mass media, is only in its infancy. But the program has already generated buzz on a social-networking Web site called the Pool....Re:Poste is one of 600 creative works — games, art, and more — by new-media students and faculty members, most of them on the Orono campus, described in the Pool, which also contains about 2,000 reviews of those works. Starting in June, the Pool will have a much wider reach, as people in general will be invited to add material to the site, rate others' projects, build on their ideas, and find collaborators for their own projects.

The Pool, as yet little known, could provide a new avenue for new-media scholars to do their jobs. Eventually it could play a role in their tenure and promotion as well.

The numbers and influence of such scholars in academe are growing, and they are looking for new ways for their institutions to evaluate them. Books and journal articles alone are a flawed measure of their productivity, new-media professors say, because many of their accomplishments exist only as Web sites, interactive games, or multimedia presentations. The Pool, they suggest, can be one measure for judging their work.

Most of the commentary in the article supports the idea that alternative recognition metrics can be useful for scholars of the Internet age. Gerard McKiernan of Iowa State says the Pool could be a good barometer of a scholar's influence:

"Five hundred heads is better than two in assessing the value of a work," says Mr. McKiernan, who runs the blog Scholarship 2.0, on alternative Web-based methods for scholarly publishing.

Richard Chait of the Harvard Graduate School of Education remains skeptical ("I don't know how you authenticate the value of Web-site hits or what people say on Web sites"). He seems to have missed the fact that all three innovations described in the article--The Pool, Re:Poste, and ThoughtMesh--are tools designed to inject trust into the wild and woolly world of online publishing.

jon

Sunday, May 04, 2008

Re: Exclusivity and Heresy | Alternative academic criteria

My take on parallels between control over curatorial and academic contexts--

Danny Butt wrote [New Media Curating list]:
>
>I can't help but think of homologies back to the idea of net.art as an
>attack on the gallery system. What I think has become clearer is that
>the role of curatorial practice, or the museum, or the publisher, is
>not merely that of gatekeeper as it is often conceived in the net.art
>imagination. It is also about the provision of context that is a
>critical aspect of the entire ecosystems of disciplines and practices.

I agree that curators provide context as well as gatekeeping, but if my fifteen years as a curator in a major museum are any indication, the ability to control the context is even more powerful than the ability to control who gets in the door.

Sure, there are some artists and curators mounting risky shows in alternative spaces. But as long as these efforts are evaluated according to the art market's prevailing hierarchy of value, they don't have much effect on the top of the pyramid.

This was precisely the value of Internet art--not just to produce and distribute art outside the museum, but to establish a different context that wasn't under the thumb of blue-chip gallerists and auctioneers.

Similarly, as Sean Cubitt mentioned, university research is increasingly evaluated according to a monolithic hierarchy that reduces each researcher to a numerical standing calculated from the number of refereed articles times the "rank" of each journal. This rankism is a pitifully shallow view of the ecosystem required for critical or creative thought, and is one of the "impediments to new ideas and expression" that Roger Malina has described as crippling the contemporary university.
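
To see just how shallow the reduction is, here is a minimal sketch of that kind of scoring (the types and ranks are illustrative, not any actual assessment formula):

    // Illustrative only: the single number such audits reduce a researcher to.
    type Article = { title: string; journalRank: number }; // higher rank = "better" journal

    function standing(articles: Article[]): number {
      // Refereed articles weighted by the "rank" of each journal:
      // everything else about the work disappears into one scalar.
      return articles.reduce((score, a) => score + a.journalRank, 0);
    }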

So how can academics nourish the ecosystem for new media research?

1. Publish early and often. The scientists are doing it (see Mitchell Waldrop's article in this month's Scientific American at http://www.sciam.com/article.cfm?id=science-2-point-0&print=true). Some folks on this list have already published on ThoughtMesh (http://thoughtmesh.net), which will soon launch a "submesh" feature that emulates journal selections.

2. Negotiate each publication with your press. If the contract your press sends you doesn't explicitly allow you to self-archive your work, write it on the contract and fax or email it back. You'll be surprised at how flexible publishers can be.

3. Lobby your university to upgrade its promotion and tenure criteria for the 21st century. As mentioned elsewhere on this list, Leonardo has been quick to see the need to expand publication opportunities for scholars in the networked age; Leonardo
magazine will soon be publishing the guidelines for new media academics produced by Still Water at the University of Maine:

"New Criteria for New Media" (white paper)
http://newmedia.umaine.edu/interarchive/new_criteria_for_new_media.html

"Promotion and Tenure Guidelines" (sample redefined criteria)
http://newmedia.umaine.edu/interarchive/promotion_tenure_redefinitions.html

I've already received a half-dozen emails from folks hoping the publication of criteria like these will force their institutions to recognize the new forms of research birthed by digital media. If you have your own guidelines or want to contribute
to the conversation, please join the Leonardo Education Forum discussion at http://artsci.ucla.edu/LEF/node/104.

Cheers,

jon

Friday, March 07, 2008

"internet too fast for academia"?

http://blogs.telegraph.co.uk/technology/shanerichmond/feb08/thurman-ugc-study-is-flawed.htm

Does a two-and-a-half-year publishing turnaround render studies of the Web dead on arrival? That's the claim debated in the following exchange over a study of the efficacy of reader comments in online journalism. Maybe Leonardo Transactions can help?

jon

* Posted by Shane Richmond on 28 Feb 2008 at 11:28
....Neil [Thurman] is an eminent academic and experienced in this field and I'm not suggesting that this study is without merit. However, many of the problems he highlights are not problems any more. Some of the problems we have now didn't exist
back then.

Does the internet move too fast for academia?

* seamusmccauley 28 Feb 2008 12:45
Shane - I emailed Neil about this, as I had the same concerns (our own cited interviewee likewise left the company more than two years ago). He was kind enough to share with me a far more up-to-date report. I've read it and it addresses some of the
issues you raise. Alas, I understand that academic publishing cycles mean the new report won't be out until September, when things will of course have moved on again.

Not Neil's fault, I hasten to add - he's doing some great work in this space, indeed some of the only really rigorous academic studies into the subject at all. But the academic publishing schedule he seems to be lumbered with does create these
considerable problems of relevance and timing. By the time his papers come out they are essentially recent histories of the web rather than investigations of the current state of the art.

* neilthurman 28 Feb 2008 14:26
As Seamus recognises, the "problem" you perceive regarding the length of time that has passed since the data was collected, is not of my making, but a result of the fact academics are leant on to publish in peer-reviewed journals (who demand
exclusivity) in order that they and their departments are rewarded--for example with income from the Research Assessment Exercise. Even though the journal that published this paper has recently increased its pagination and frequency, more than 17
months elapsed between acceptance and publication (and more than a year between submission and acceptance).

Some academic publishers are trying to speed up the publication cycle (via initiatives like Taylor and Francis' iFirst), although the promised improvements are "several weeks", rather than the months or even years required.

Tuesday, July 10, 2007

Web 2.0 means Recognition Metrics 2.0

This interesting policy change from one of the major Internet ratings companies reflects how remote scripting has changed the way users interact with Web pages--jon.

Computerworld - New Web metric likely to hurt Google, help YouTube
In a nod to the success of emerging Web 2.0 technologies like AJAX and streaming media, one of the country's largest Internet benchmarking companies will no longer use page views as its primary metric for comparing sites.

Nielsen/NetRatings will announce Tuesday that it will immediately begin using total time spent by users of a site as its primary measurement metric....the change was prompted by a continuing increase in the use of AJAX, or Asynchronous JavaScript and XML, which allows a Web site to refresh content without reloading an entire page, and to the growing use of audio and video streaming.

"It is not that page views are irrelevant now, but they are a less accurate gauge of total site traffic and engagement," Ross said. "Total minutes is the most accurate gauge to compare between two sites. If [Web] 1.0 is full page refreshes for content, Web 2.0 is, 'How do I minimize page views and deliver content more seamlessly?'"

For example, he said, MySpace may have 10 to 11 times more page views than YouTube, but myspace.com users spend only three times more minutes on the site, Ross added. Therefore, measuring total time spent on a site will make it easier for advertisers to mold their ads to how users are actually accessing content, he said.

"On YouTube there will be more ads flowing in based on duration (on videos)," he said. "The more time I spend on YouTube ... [advertisers] will figure out a way to monetize that."

Monday, April 09, 2007

PowerPoint bad for brains

I couldn't resist sharing this link from Alain Depocas--jon.

Research at the University of NSW, Sydney, Australia, claims the human brain processes & retains more information if it is digested in either its verbal or written form, but not both at the same time. More of the passages would be understood &
retained if heard or read separately. "The use of the PowerPoint presentation has been a disaster," Professor Sweller said. "It should be ditched."

"It is effective to speak to a diagram, because it presents information in a different form. But it is not effective to speak the same words that are written, because it is putting too much load on the mind & decreases your ability to understand
what is being presented."

This new insight clearly puts the recent report about using PowerPoint in Parliament speeches in a new perspective.

http://infosthetics.com/archives/2007/04/powerpoint_bad_for_brains.html

Friday, March 23, 2007

The h-index and its discontents

In a post to the OACI working group list, Peter Suber noted that the science index Scopus has begun to use Jorge Hirsch's "h-index" to supplement its other measurements of author impact. An author's h-index is the largest number h such that h of his or her articles have each been cited at least h times.

http://www.econtentmag.com/Articles/ArticleReader.aspx?ArticleID=35680&CategoryID=17
http://en.wikipedia.org/wiki/Hirsch_number
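
For concreteness, here is a minimal sketch of the computation:

    // h-index: the largest h such that h papers have at least h citations each.
    function hIndex(citations: number[]): number {
      const sorted = [...citations].sort((a, b) => b - a); // most-cited first
      let h = 0;
      while (h < sorted.length && sorted[h] >= h + 1) h++;
      return h;
    }

    hIndex([50, 40, 9, 8, 7, 5, 3, 2, 1]); // => 5 (five papers cited 5+ times)

Note how the measure rewards sustained output over singular influence: a couple of blockbuster papers barely move it, which is exactly the weakness the following example exposes.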

In a simple demonstration of the weakness of such all-in-one numerical metrics, Eberhard Hilf of the Institute for Science Networking Oldenburg GmbH pointed out that the physicist P. W. Higgs has a minuscule h-index of only 9, despite having predicted the famous Higgs boson. The existence of this massive particle would resolve some of the deepest uncertainties in elementary particle physics, which is why the physics community has spent millions of dollars designing and building particle accelerators to find it.

Yet another case for more nuanced recognition metrics than a list of names with numbers next to them.

jon

Sunday, February 18, 2007

Infoworld interview with Peter Suber

Trebor Scholz alerted me to this audio interview with Open Access maven Peter Suber, which touches on both recognition metrics and the Interarchive agenda:

http://weblog.infoworld.com/udell/2006/08/18.html

I was especially interested to learn that Connotea (Nature's science-tagging system) can work hand-in-glove with EPrints (the most popular open access repository software). These sorts of metadata compatibilities bring the holy grail of searching across "dark archives" closer to our grasp.

jon