Sunday, May 24, 2009

Concepts Assignment

Concept 28


“Advanced Internet users recognise the character of the Web, seek to utilise its advantages, ameliorate its deficiencies and understand that not all users have the same abilities as themselves in reconciling the paradox of the WWW.” (Allen n.d.)


Cloud computing seeks to take advantage of the paradox facing new users of computers and the WWW: it makes tasks and applications appear easy when, at a deeper level, they are complex and require skill and knowledge.

What is cloud computing anyway? The ‘cloud’ refers to data centres that sell excess storage and computing power to individuals, businesses and governments to amortise the cost of setting up and running the data centre. Many users already avail themselves of these services without realising the underlying cloud component, paying instead by viewing the advertisements placed around them. Some of the common ‘clouds’ include email (Gmail, Yahoo! and Hotmail), search (Google, Yahoo! and MSN), applications (Google Docs, MobileMe, Evernote), storage/distribution (Flickr, YouTube) and communities (Facebook, MySpace). In fact, according to a recent survey by the Pew Internet & American Life Project, 69% of users are already taking advantage of the ‘clouds’ (Horrigan 2008).

In the tasks undertaken in this unit, the ‘cloud’ most accessed was Google, and this ‘cloud’ made the initial searching very easy, belying the complex functions being undertaken on this user's behalf. As the unit progressed and advanced searching techniques were introduced, it became apparent that the use of the ‘cloud’, whilst beneficial initially, had encouraged lazy habits and a shallower understanding of the concepts underpinning the WWW.

However, an advanced user, aware of the ‘cloud’ and its implications, would be able to use ‘cloud’ services where appropriate (or convenient) while remaining aware of their downsides and of the more complex but better tools and resources available. Advanced users also have a social responsibility to signpost the way and help educate novice users about the implications of always taking the “easy” way. Educating and assisting novice users to partake of the deeper web and to look for diversity in resources helps to advance the Web by way of diversity and decentralisation.

So are the gathering ‘clouds’ good or bad in relation to the inherent nature of the Web: decentralised, innovative and complex? I would argue that in relation to innovation, diversity and decentralisation, ‘clouds’ are inherently bad, as first and foremost they centralise data and applications, forcing users into proprietary systems. Innovation is stifled as users do not have to deal with information in a variety of formats. Centralisation also subverts the freedom of the internet by raising the possibility of corporations or governments having increased control over the data stored and manipulated by these data centres.

However, these bad outcomes may be offset, or cancelled out altogether, by the benefits to universities and businesses able to achieve better outcomes through the economies delivered by the ‘clouds’. If the bigger players lay down an ethical and fair playing field, then providers may resist the temptation to ‘control’ the data they store, or to push users into ever more expensive proprietary systems.

‘Clouds’ also offer the major benefit of mobility of data and browser-based applications, making the Web easier to use and more accessible. The trade-off, between attracting more new users who contribute to the Web and the homogenisation and centralisation of the Web, illustrates another Web paradox.

Annotated Sites:

Site 1: University of California, Berkeley. Reliable Distributed Systems Laboratory. http://radlab.cs.berkeley.edu/

Cutting-edge use of cloud computing. This site is from a major U.S. university with backing from internet heavyweights Google, Microsoft and Sun Microsystems. Their mission statement reads something like, “We want to enable an individual to invent and run the next big IT service.”

Although not a reference site, I felt it was important because of that mission statement. Because the ‘cloud’ will centralise data and services, I think it is universities especially that will provide ‘cloud’ service providers and users with examples of an ethical approach, one that promotes diversity and innovation. So even if these researchers are promoting their ‘cloud’ vision, they are also enabling individuals to provide new and unique services.


Site 2: First Monday. http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/index

A peer-reviewed internet journal with a clean, clear layout and navigation, and open access for readers and contributors. This site is the future of journals if we are to follow a model of social value versus market value in disseminating academic information. It also has a great tool that allows references to be imported straight into EndNote. Being peer reviewed, the papers are of high quality. First Monday also allows access to its latest papers free of charge. The website is provided by the University of Illinois and allows users to register for other journals within the university.

References:

Allen, M. (n.d.). "Internet Communications Concepts Document." Net 11 Internet Communications Retrieved 20 May, 2009, from http://lms.curtin.edu.au/webapps/portal/.

Horrigan, J. (2008). "Use of cloud computing applications and services." Retrieved 20 May, 2009, from http://pewinternet.org/Reports/2008/Use-of-Cloud-Computing-Applications-and-Services.aspx.

Additional sources:

Jaeger, P., J. Lin, et al. (2009). "Where is the cloud? Geography, economics, environment, and jurisdiction in cloud computing." First Monday. Retrieved 20 May, 2009, from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/2456/2171.




Concept 11

“Advanced Internet users learn to intuitively conceive of any document, file, message or communication as consisting of metadata and data. They then can explore the functions of various communications/information software looking for how that software can assist them in using metadata to enable sorting, processing or otherwise dealing with that data.” (Allen n.d.)

With the mind-bending amount of data on the web, a system is needed to describe, in detail, the data held in any container. This is the job of metadata, literally data about data. Like the familiar Dewey Decimal Classification used in libraries, the purpose of metadata is to help locate appropriate resources. However, metadata goes further than conventional systems by allowing far more information to be connected to a resource. The Dublin Core system, for example, provides fifteen metadata elements that describe, in three groups, a resource's content, intellectual property and instantiation.
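To make the three groups concrete, here is a minimal Python sketch; the resource and all of its values are invented, and only the fifteen element names and their usual grouping are taken from the Dublin Core documentation.

```python
# A sketch of one Dublin Core record, with the fifteen elements arranged
# in the three commonly described groups. All values are invented.
dc_record = {
    "content": {
        "title": "An example article about metadata",
        "subject": "metadata; interoperability",
        "description": "A short invented description of the resource.",
        "type": "Text",
        "source": "An Example Journal",
        "relation": "http://example.org/journal",
        "coverage": "2009",
    },
    "intellectual_property": {
        "creator": "A. Author",
        "publisher": "Example Press",
        "contributor": "B. Editor",
        "rights": "Open access",
    },
    "instantiation": {
        "date": "2009-05-24",
        "format": "text/html",
        "identifier": "http://example.org/article/1",
        "language": "en",
    },
}

# Flatten to "dc.element: value" pairs, the sort of thing that ends up as
# <meta> tags in a page head or inside an oai_dc record.
for group in dc_record.values():
    for element, value in group.items():
        print(f"dc.{element}: {value}")
```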


At its most basic level, shareable metadata should be human-understandable (Shreeves, Riley et al. 2006). While completing the Net11 unit tasks and preparing the referencing for the concepts assignment, del.icio.us proved to be a great tool. Through its keyword tagging and notes metadata, bookmarks are more easily searched and put into context. This not only speeds up referencing, but provides ‘breadcrumbs’ to bring the mind back to where it was when the bookmark was created.
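As a rough illustration of why even lightweight tag and note metadata pays off, this Python fragment filters a couple of bookmarks (the URLs appear elsewhere in this log) by tag, which is essentially what the del.icio.us search box does on my behalf:

```python
# Bookmarks carrying the same kind of metadata del.icio.us stores:
# a URL, free-form notes, and keyword tags.
bookmarks = [
    {"url": "http://www.isi.edu/in-notes/rfc959.txt",
     "notes": "RFC 959, the FTP specification. Used for Concept 24.",
     "tags": ["ftp", "rfc", "net11"]},
    {"url": "http://dublincore.org/",
     "notes": "Dublin Core metadata initiative. Used for Concept 11.",
     "tags": ["metadata", "dublin-core", "net11"]},
]

def by_tag(items, tag):
    """Return every bookmark carrying the given tag."""
    return [b for b in items if tag in b["tags"]]

for b in by_tag(bookmarks, "metadata"):
    print(b["url"], "-", b["notes"])
```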


We are seeing an evolution from the current Web of documents towards a Web of linked data, and the broad benefits this brings (Hall, De Roure et al. 2009). To facilitate this new Semantic Web we need intelligent agents and bots to aggregate the metadata, and this will require the big data collections to be more careful with how they use it. Rather than stuffing a single record in, say, the Dublin Core schema with all the information relating to a piece of data, several individually sensible records should be created, with the aim of making the most of the aggregator bots that build metadata collections.


Carl Lagoze has argued that “metadata is not monolithic ... it is helpful to think of metadata as multiple views that can be projected from a single information object” (Lagoze, 2001). In photography there is an increasing use of metadata to edit images, not just define them. When you edit an image in its raw state, you change the attributes of its associated XML sidecar file but leave the image data untouched. This means you have one base file and another file that acts like a filter through which you perceive the image, allowing totally non-destructive editing and, perhaps more importantly, a big saving in library size. Will we have a web of base files with intelligent filters that we view this base information through, then edit and append to suit our purposes without editing the base file?
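A loose Python sketch of the sidecar idea follows; the file name and setting names are simplified stand-ins rather than the real XMP/Camera Raw schema, but it shows the edits living beside, not inside, the image file:

```python
import xml.etree.ElementTree as ET

def write_sidecar(image_path, edits):
    """Record edit settings in an XML sidecar next to the raw file.

    The raw image itself is never opened or modified; a viewer that
    understands the sidecar applies the edits at display time.
    """
    root = ET.Element("edits", {"image": image_path})
    for name, value in edits.items():
        ET.SubElement(root, "setting", {"name": name, "value": str(value)})
    sidecar_path = image_path.rsplit(".", 1)[0] + ".xml"
    ET.ElementTree(root).write(sidecar_path, encoding="utf-8",
                               xml_declaration=True)
    return sidecar_path

# Hypothetical raw file and adjustments; only the tiny sidecar changes on disk.
print(write_sidecar("IMG_0042.CR2", {"exposure": 0.7, "white_balance": 5200}))
```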


Site 1: http://dublincore.org/

This is the website of the Dublin Core Metadata Initiative, a not-for-profit organisation helping to develop open standards for interoperable metadata. A great resource for all things metadata: there are tools for adding metadata to web pages, a Firefox plugin for viewing metadata embedded in HTML and XHTML documents, and plenty of information about current schemas and upcoming proposals.


Site 2: http://www.oaister.org/about.html

This site is the result of the move to a more semantic web. It uses the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to find data in the deep web, hidden behind scripts in institutional databases. As more organisations come on board, this will become a very powerful resource; maybe OAIster is the new Google.
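As a minimal illustration of what such a harvester does, the Python sketch below issues an OAI-PMH ListRecords request and pulls out Dublin Core titles and identifiers; the repository URL is a placeholder, and a real harvester would also follow resumption tokens and handle errors:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Placeholder endpoint: substitute the OAI-PMH base URL of a real
# institutional repository you want to harvest.
BASE_URL = "http://repository.example.edu/oai"

ns = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

# Ask for records in the simple Dublin Core (oai_dc) format.
url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

# Print the title and identifier of each harvested record.
for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
    title = record.find(".//dc:title", ns)
    ident = record.find(".//dc:identifier", ns)
    print(title.text if title is not None else "(no title)",
          "-",
          ident.text if ident is not None else "(no identifier)")
```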

References

Dublin Core (2009). Retrieved May 20, 2009, from http://dublincore.org/

Hall, W., D. De Roure, et al. (2009). "The evolution of the Web and implications for eResearch." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367(1890): 991-1001.

Lagoze, C. (2001). "Keeping Dublin Core Simple: Cross Domain Discovery or Resource Description?" D-Lib Magazine 7(1). Retrieved 18 May, 2009, from http://dlib.anu.edu.au/dlib/january01/lagoze/01lagoze.html.

OAIster (2009). Retrieved May 20, 2009, from http://www.oaister.org/about.html

Shreeves, S., J. Riley, et al. (2006). "Moving towards shareable metadata." First Monday. Retrieved 18 April, 2009, from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1386/1304.




Concept 8

“The daily practice of electronic communication is shaped by over-familiarity with one's own computer system, and a tendency to assume that – as with much more established forms of communication – everyone is operating within compatible and similar systems. When in doubt, seek to communicate in ways that are readable and effective for all users, regardless of their particular systems.” (Allen, n.d.)

Web design should, in theory, be about making a site as accessible and useful to as many people as possible. Countless dollars and hours are spent on corporate websites meant to improve the public's awareness of a brand, yet frequently these sites are not accessible to people with disabilities. Designers and administrators have many tools and resources available to enable their sites to comply with access guidelines, yet an animated splash page seems to be more important than reaching their full potential market.


A diverse range of people of all ages and abilities access the web. Ability is shaped by physical and mental capacity, education, available technology and socio-economic factors. Designers and administrators should be conscious of these differing levels of ability and take them into account when planning and executing web deployments. “The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect” (Berners-Lee, 1997).

The BBC’s Ouch! website is an exemplar of site design that looks good and uses multimedia, yet jumps through all the accessibility hoops. It has options for changing font size and colours site-wide (using cookies and CSS) without destroying the layout, there is a text-only version for Lynx browsers, and it can be navigated without a mouse. Simple stuff, but recent studies point out that a large percentage of sites (70-98%, depending on the category of site) are not accessible (Lazar et al., 2004).

Accessibility of an organisation’s website, not only for people with physical and mental disabilities but also for the elderly and the less educated, improves the brand. It also improves the overall internet experience when it is standard practice, because a site that is fully accessible and complies with the W3C/WAI standards employs strict XHTML and CSS. For this reason alone it would be good practice for all web designers to strive for full accessibility in the sites they design.
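One small, automatable piece of those guidelines is that images must carry alternative text. A rough sketch using Python's standard library (the markup below is invented) flags img tags with no alt attribute:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> tag that lacks an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

# Invented markup: one accessible image, and one that a screen reader
# user would hear only as a file name, or not at all.
page = '<img src="logo.png" alt="Company logo"><img src="splash.gif">'

checker = AltTextChecker()
checker.feed(page)
for src in checker.missing:
    print("Missing alt text:", src)
```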


Web 2.0 leader Facebook is actively working with the American Foundation for the Blind (AFB) to improve its social networking service; according to the AFB's statistics, 20 million Americans have reported significant vision loss (Wauters, 2009). The other Web 2.0 (and possibly Semantic Web) leader, Google, is also contributing with research and implementation of accessible design for its basic search engine and other services.

Site 1:

http://www.w3.org/WAI/

This is the W3C’s initiative to develop guidelines for Web accessibility, as well as to provide support materials and resources that help people understand and implement it. It offers extensive information about the existing standards and the working drafts of tomorrow's standards, as well as links to all the important resources needed to implement accessible web design.

Site 2:

http://www.bbc.co.uk/ouch/

One of the best websites designed for accessibility; it looks and functions exactly as we would expect a “normal” site to. It proves the point that you do not have to sacrifice design to achieve accessibility. The site uses multimedia including podcasts and images, blogs and message boards, but has alternatives for users who need other options to use the site and access its information.

References:

Berners-Lee, Tim (1997) on the launch of the International Program Office for Web Accessibility Initiative - http://www.w3.org/Press/IPO-announce

Lazar, J., Dudley-Sponaugle, A., & Greenidge, K.-D. (2004). Improving web accessibility: a study of webmaster perceptions. Computers in Human Behavior, 20, 269-288. Retrieved May 18, 2009, from www.elsevier.com/locate/comphumbeh

Wauters, Robin. (2009, April 7). Facebook commits to Making Social Networking More Accessible for Visually Challenged Users. Retrieved May 20, 2009, from http://www.techcrunch.com/2009/04/07/facebook-commits-to-making-social-networking-more-accessible-for-visually-challenged-users/




Concept 24

“File transfer protocol remains the best example of how the Internet enables files to be sent to and from clients, at their initiation, thus emphasising the local autonomy of the individual user, and the arrangement of ‘cyberspace’ into publicly accessible and changeable regions.” (Allen, n.d.)

P2P is conceivably one of the main drivers of broadband take-up in Australia. In a society that once dreamt of egalitarian ideals, the idea of “sharing” is readily defensible. Australians are amongst the most prolific downloaders of illegal content in the world: total visits by Australians to BitTorrent websites including Mininova, The Pirate Bay, isoHunt, TorrentReactor and Torrentz grew from 785,000 in April last year to 1,049,000 in April this year, according to Nielsen, a year-on-year increase of 33.6 per cent (Moses, 2009). Music, movies, games and warez (software) all reside on suburban computers. Is this rebellion or opportunism?

Fair-use models are replacing draconian models of copyright protection around the globe, not because of corporate philanthropy but out of corporate necessity. In academia as well, the offering of full text under various schemes to promote the interchange of information has irrevocably altered the landscape of intellectual property. Metadata harvesting and the Semantic Web will bring more pressure to bear on existing models of copyright and distribution of intellectual property, as institutions have to free up more data under open models to compete with other institutions doing the same thing to grab market share.

Social value versus market value is the consideration for us all. The growth of social networking and user-generated content implies that people are willing to create and share. They experience content and they create for others, without expectation of payment. So rather than creating for fiscal benefit, these contributions come more from an uncontrollable desire in some people to “share” their experiences and thoughts.

FTP has historically been a one-way street of downloading; Web 2.0, plus an overall maturing of users, has led to a two-way street and less emphasis on the commercial profits to be had from the Web. Are we moving away from a web of “taking/buying” to a web of “sharing”, as we realise that if we co-operate more and profit less in the short term, we all profit in the long term?
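To ground the “two-way street” point, the sketch below uses Python's standard ftplib for both directions; the host, credentials and file names are placeholders, and a public server would normally only permit the download half:

```python
from ftplib import FTP

# Placeholder server and credentials for illustration only.
HOST, USER, PASSWORD = "ftp.example.org", "anonymous", "guest@example.org"

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)

    # The traditional one-way street: retrieve a file from the server.
    with open("readme.txt", "wb") as local:
        ftp.retrbinary("RETR readme.txt", local.write)

    # The other direction: contribute a file back to a publicly changeable
    # region (assumes the local file exists and the server allows writes).
    with open("my_notes.txt", "rb") as local:
        ftp.storbinary("STOR my_notes.txt", local)
```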

Site 1
http://www.journals.uchicago.edu/doi/abs/10.1086/503518

A very well thought-out legal position on the ongoing problem of music piracy and its perceived damage to the RIAA.

Site 2

http://www.ethics.org.au/

Plenty of resources here to explore the ethical world. Confronting personal knee-jerk reactions with a measured ethical approach will benefit us all.

References:

Moses, A. (2009). Illegal downloads soar as hard times bite. Retrieved May 22, 2009, from http://www.smh.com.au/news/home/technology/illegal-downloads-soar-as-hard-times-bite/2009/05/27/1243103577467.html


Monday, May 18, 2009

Module 5 - Information Ecologies

"As you read, think about the following questions – you may want to discuss with other students, or make notes in preparation for your concept project - LOG ENTRY: make sure you include some reflections on these questions in your learning log:

  • Q- "How might the metaphor of an ‘ecology’ impact on the way you think about, understand or use the Internet?"

A- It makes me think of the people that make the web/net, not the technology. It also makes me think of the inequities of access and sharing of information. As a big fan of the open-source mentality, I think economically and information-rich governments/corporations need to be as careful of the balance in the information ecology as they are slowly becoming in the natural ecology. Market values and social values both have to be weighed in the highest policy decisions.
  • Q-"How are the concepts ‘information’ and ‘communication’ understood within the framework of an ‘information ecology’?"

A- Information is the ever growing and changing human knowledge base and communication is the means of retrieving and disseminating this information. Communication is the "flow" between the "nodes". (1)
  • Q- "Why don’t we talk of a ‘communication ecology’?"

A- I have to dispute the premise that this concept is not in use. The Annenberg School for Communication at USC has published a lot on this concept.
"
A communication ecology approach comes closer to the reality of everyday life where people select new, traditional, and/or geo-ethnic media and interpersonal modes of communication from all of the options they have available to them. For example, we ask people to think about all of the ways they go about gaining understanding or getting information about their community, their health, etc. and to tell us which ways are most important to them. In this way, their responses are in context of their communication ecologies." (2)

(1) Rafael Capurro, Towards an Information Ecology. In: Irene Wormell (Ed.): Information and Quality. London: Taylor Graham 1990, 122-139.
(2) http://www.metamorph.org/research_areas/communication_ecology_and_icts/

Sunday, May 17, 2009

Module 4 - Evaluating the Web

"In your own words, write an annotation for the source which could communicate to a reader both your 'judgment' of the site according to what you have learnt from the tutorial, and also the following information:
the reliability and authority of the site / source / article
the main ideas or subjects discussed in the article
the purpose for which the site was written (this might include any apparent external interest, intellectual motivation or contextual information)"

I consider this "source" to be the best of the three used in the last task.

Site 1 : http://www.isi.edu/in-notes/rfc959.txt
author : J. Postel, J. Reynolds.
institution : University of Southern California, Information Sciences Institute.
summary : The current RFC (959) for the File Transfer Protocol (FTP). An overview, history and technical specifications for FTP. Authors Postel and Reynolds. USC/ISI

Site/page evaluation: This page is part of the Information Sciences Institute, which is part of the University of Southern California, which establishes its authority. The original paper is from 1985 and this is still the current protocol. The protocol is heavily linked to by many educational institutions (in fact I had trouble finding anything else under the search keywords) and is also on the W3C site. This is the definition of FTP within TCP/IP and includes an overview and history of the protocol. It is a reference work and has no bias or agenda.



"Compare your final analysis and annotation with the material you saved for the last task, and think about these questions (you may wish to discuss these questions in your group)

in terms of your own future use, which 'body ' of information (ie. the original 'snapshot' of the site, or your own, annotated, analytical version) would be most useful to refer back to?

In term of external users (i.e. if you included this site as a hyperlink or resource on a website) which body of information would best help them judge if the site was useful or of interest to them?"

In this case there is not a lot of difference between the site's own intro (which I didn't use) and my own. However, I can see cases where there would be a difference, mainly in context. For my own use, the annotated bookmarks I keep in del.icio.us serve me well and are accessible away from my PC. I can include all of the necessary info within the bookmark form provided, including tags.

Once again, because this references a protocol, there is no big difference between the snapshot and the annotation for external users.

Module 4 - Organising search information task

"Choose the best three sources found in the previous task...using whatever software or tool you think is most appropriate, record the following information about these sites
  • URL
  • author
  • institution
  • blurb/summary/screenshot"

For me the most effective tool for this is del.icio.us; it can't record the screenshot, but can do everything else. I use the comments box for the summary of the site (in my own words, as this is where I can add my own context), plus the author and institution. I also add the author and institution to the tags, for easier searching.

Site 1 : http://www.isi.edu/in-notes/rfc959.txt
author : J. Postel, J. Reynolds.
institution : University of Southern California, Information Sciences Institute.
summary : The current RFC (959) for the File Transfer Protocol (FTP). An overview, history and technical specifications for FTP. Authors Postel and Reynolds. USC/ISI

Site 2 : http://www.youtube.com/watch?v=eA9mnY1Z2so
author : Prof.I.Sengupta
institution :
Department of Computer Science & Engineering ,IIT Kharagpur.
summary : Video lecture on Client Server Concepts DNS,Telnet,Ftp.
Prof.I.Sengupta. Department of Computer Science & Engineering ,IIT Kharagpur.

Site 3 : http://archive.salon.com/tech/col/rose/2001/07/20/napster_diaspora/index.html
author : Scott Rosenberg
institution : Salon.com emag
summary : Opinion piece on the post Napster possibilities of file sharing.
Scott Rosenberg. Salon.com emag.

As most of my specialised searches using my original keywords turned up dry information on the technical aspects of the protocol, I will have to devise better searches to get information on the social impacts of FTP, i.e. Napster, BitTorrent, etc.

Module 4 - Boolean Search

"Taking the same key words of your last search, think about how you would best search for the following..."

  • The biggest number of hits relating to these keywords.
Results 1 - 10 of about 32,400,000 for file transfer protocol - Strategy = all the words no operators, but then I thought I could go one better...
Results 1 - 10 of about 1,510,000,000 for file OR transfer OR protocol - Strategy = any of the words individually.


  • Information most relevant to what you ACTUALLY wanted to look for
Results 1 - 10 of about 1,550,000 for "file transfer protocol" - Strategy = exact phrase to narrow search
Results 1 - 10 of about 428,000 for file transfer protocol history rfc - Strategy = more keywords to narrow search further.

  • Information coming only from University websites.
Results 1 - 10 of about 32,000 for file transfer protocol history "file transfer protocol " site:.edu - Strategy = restrict search to .edu domains


Sunday, May 10, 2009

Module 4 - Search engine task

Have installed Copernic as per the previous task and will perform my search with it and Google. The keywords will be file transfer protocol, as this is one of the concepts that I will be using in my concepts assignment.

Google results -
First five Google results -
  1. http://en.wikipedia.org/wiki/File_Transfer_Protocol
  2. http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci213976,00.html
  3. http://www.faqs.org/rfcs/rfc959.html
  4. http://www.imagescape.com/helpweb/ftp/ftptop.html
  5. http://www.filetransferplanet.com/ftp-guides-resources/

Copernic results -
First five Copernic results -
  1. http://en.wikipedia.org/wiki/File_Transfer_Protocol
  2. http://www.faqs.org/rfcs/rfc959.html
  3. http://www.filetransferplanet.com/ftp-guides-resources/
  4. http://www.columbia.edu/kermit/
  5. http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci213976,00.html
OK, comparing the two searches above shows the top five results are almost identical, but the numbers of results are radically different. I suspect this is partly configuration and partly that the Google engine is not included in the free Copernic software. Copernic is a bit overwhelming on first use, especially when you are used to Google. However, I suspect that the full version in the hands of an experienced user would yield good results. Google Scholar looked promising as well; it led me to the SAGE Journals site, which I will register with for more research options. As Copernic had the only university result (http://www.columbia.edu/kermit/) in the list, on this quick assessment I give it the honours. I think this task is sort of like getting new users to compare Windows photo editor and Photoshop: similar but very different. I look forward to improving my searching techniques (not just in this unit but in my use of the web/net overall).

Module 4 - Tools and plugins

Download and install at least two unfamiliar programs from the list and evaluate them. I have and use most of the programs on the list except Copernic and the offline browsers, so I will try these. For the record, though, here is a quick run-through of the whole list.

  1. Acrobat: I have version 9 which is the latest to date. I also use Acrobat Pro and Photoshop to create and edit pdfs.
  2. Have the latest Flash/Shockwave player and use Flash MX to create .fla and .swf files.
  3. Media players. I agree, you need more than one due to proprietary formats. I have and use QuickTime, RealOne, WMP, VLC, DivX and Ogg Vorbis.
  4. Have downloaded Copernic and after a quick trial it looks like a piece of software I will be using a lot more as I study more units. At first glance the tracking feature looks good and so do a lot of the features locked in the basic version. Will be upgrading to full version and will update this post when I have played with it. I will be using this in conjunction with Google for my concepts assignment research.
  5. I use del.icio.us, and drag current stuff to the bookmarks toolbar if it is of passing interest. At the cost of grade points, I cannot bring myself to download and install anything that has 'buddy' in the title. Especially for Windows.
  6. Will try these.