Sunday, May 24, 2009

Concepts Assignment

Concept 28


“Advanced Internet users recognise the character of the Web, seek to utilise its advantages, ameliorate its deficiencies and understand that not all users have the same abilities as themselves in reconciling the paradox of the WWW.” (Allen n.d.)


Cloud computing takes advantage of the paradox facing new users of computers and the WWW: tasks and applications are made to appear easy when, at a deeper level, they are complex, requiring skill and knowledge.

What is cloud computing anyway? The ‘cloud’ refers to data centres that sell excess storage and computing power to individuals, businesses and governments to amortise the cost of setting up and running those data centres. Many users already avail themselves of these services without realising the underlying cloud component, paying via their viewing of placed advertisements. Some of the common ‘clouds’ include email (Gmail, Yahoo! and Hotmail), search (Google, Yahoo! and MSN), applications (Google Docs, MobileMe, Evernote), storage/distribution (Flickr, YouTube) and communities (Facebook, MySpace). In fact, according to a recent survey by the Pew Internet & American Life Project, 69% of users are already taking advantage of the ‘clouds’ (Horrigan 2008).

In the tasks undertaken in this unit, the ‘cloud’ most accessed was Google, and this ‘cloud’ made the initial searching required very easy, belying the complex functions being undertaken on this user's behalf. As the unit progressed and advanced searching techniques were introduced, it became apparent that use of the ‘cloud’, whilst beneficial initially, had encouraged lazy habits and a shallower understanding of the concepts underpinning the WWW.

However, an advanced user, aware of the ‘cloud’ and its implications, would be able to use ‘cloud’ services where appropriate (or convenient) while remaining aware of the downsides, and of the more complex but better tools and resources available. Advanced users also have a social responsibility to signpost the way and help educate novice users about the implications of always taking the “easy” way. Educating and assisting novice users to partake of the deeper web and to look for diversity in resources helps to advance the Web by way of diversity and decentralisation.

So are the gathering ‘clouds’ good or bad in relation to the inherent nature of the Web: decentralised, innovative and complex? I would argue that in relation to innovation, diversity and decentralisation, ‘clouds’ are inherently bad: first and foremost, they centralise data and applications, forcing users into proprietary systems. Innovation is stifled as users no longer have to deal with information in a variety of formats. Centralisation also subverts the freedom of the internet by raising the possibility of corporations or governments having increased control over the data stored and manipulated by these data centres.

However, these bad outcomes may be offset, or cancelled out altogether, by universities and businesses being able to achieve better outcomes through the economies delivered by the ‘clouds’. If the bigger players lay down an ethical and fair playing field, then providers may resist the temptation to ‘control’ the data they store, or to push users into ever more expensive proprietary systems.

‘Clouds’ also offer the major benefit of mobility of data and browser-based applications, making the Web easier to use and more accessible. The trade-off, of more new users contributing to the Web versus the homogenisation and centralisation of the Web, illustrates another Web paradox.

Annotated Sites:

Site 1: University of California, Berkeley. Reliable Adaptive Distributed Systems Laboratory (RAD Lab). http://radlab.cs.berkeley.edu/

Cutting-edge use of cloud computing. This site is from a major U.S. university with backing from internet heavyweights Google, Microsoft and Sun Microsystems. Their mission statement reads something like, “We want to enable an individual to invent and run the next big IT service.”

Although not a reference site, I felt it was important because of that mission statement. Because the ‘cloud’ will centralise data and services, I think it is the universities especially that will provide ‘cloud’ service providers and users with examples of an ethical approach, one that promotes diversity and innovation. So even if these guys are promoting their ‘cloud’ vision, they are also enabling individuals to provide new and unique services.


Site 2: First Monday. http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/index

A peer-reviewed internet journal with a clean, clear layout and navigation, and open access for readers and contributors. This site is the future of journals if we are to follow a model of social value vs market value in disseminating academic information. It also has a great tool allowing references to be imported straight into EndNote. Being peer reviewed, the papers are of high quality, and First Monday allows access to its latest papers free of charge. The website is provided by the University of Illinois and allows users to register for other journals within the university.

References:

Allen, M. (n.d.). "Internet Communications Concepts Document." Net 11 Internet Communications Retrieved 20 May, 2009, from http://lms.curtin.edu.au/webapps/portal/.

Horrigan, J. (2008). "Use of cloud computing applications and services." Retrieved 20 May, 2009, from http://pewinternet.org/Reports/2008/Use-of-Cloud-Computing-Applications-and-Services.aspx.

Additional sources:

Jaeger, P., J. Lin, et al. (2009, 10 May 2009). "Where is the cloud? Geography, economics, environment, and jurisdiction in cloud computing." First Monday Retrieved 20 May, 2009, from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/2456/2171.




Concept 11

“Advanced Internet users learn to intuitively conceive of any document, file, message or communication as consisting of metadata and data. They then can explore the functions of various communications/information software looking for how that software can assist them in using metadata to enable sorting, processing or otherwise dealing with that data.” (Allen n.d.)

With the mind-bending amount of data on the web, a system is needed to describe, extensively, the data a container holds. This is the job of metadata, literally data about data. Like the familiar Dewey Decimal Classification used in libraries, the purpose of metadata is to help locate appropriate resources. However, metadata goes further than conventional systems by allowing much more information to be connected to a resource. The Dublin Core system, for example, provides fifteen metadata elements that describe, in three groups, a resource's content, intellectual property and instantiation.
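As a rough illustration of the idea, here is a hedged sketch of how a handful of Dublin Core elements might be rendered as HTML meta tags. The element names (title, creator, date, format) are real Dublin Core elements; the record values and the helper function are invented for this example.

```python
# A minimal sketch: a metadata record rendered as HTML <meta> tags.
# Element names are real Dublin Core elements; values are invented.
record = {
    "title": "Concepts Assignment",   # content group
    "creator": "A. Student",          # intellectual-property group
    "date": "2009-05-24",
    "format": "text/html",            # instantiation group
}

def to_meta_tags(record):
    """Render a metadata dict as one HTML meta element per line."""
    return "\n".join(
        f'<meta name="DC.{element}" content="{value}" />'
        for element, value in record.items()
    )

print(to_meta_tags(record))
```

Nothing here is specific to web pages; the same record could just as easily be serialised as XML for a harvester, which is part of Dublin Core's appeal.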


At its most basic level, shareable metadata should be human-understandable (Shreeves, Riley et al. 2006). While completing the Net11 unit tasks and preparing to do the referencing for the concepts assignment, del.icio.us proved to be a great tool. Through the use of its keyword tagging and notes metadata, bookmarks are more easily searched and put into context. This not only speeds up referencing, but provides ‘breadcrumbs’ to bring a mind back to where it was when creating the bookmark.
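The benefit of tag metadata can be shown with a toy model: a lookup filters bookmarks on their tags rather than scanning titles or URLs. The bookmarks and the `by_tag` helper below are invented for illustration, not del.icio.us's actual data model.

```python
# Toy model of tag metadata: bookmarks carry keyword tags, and a
# search filters on tags instead of scanning the content itself.
bookmarks = [
    {"url": "http://dublincore.org/", "tags": {"metadata", "standards"}},
    {"url": "http://www.oaister.org/", "tags": {"metadata", "harvesting"}},
    {"url": "http://www.bbc.co.uk/ouch/", "tags": {"accessibility"}},
]

def by_tag(tag):
    """Return the URLs of every bookmark carrying the given tag."""
    return [b["url"] for b in bookmarks if tag in b["tags"]]

print(by_tag("metadata"))
```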


We are seeing an evolution from the current Web of documents towards a Web of linked data, and the broad benefits this brings (Hall, De Roure et al. 2009). To facilitate this new Semantic Web, we need intelligent agents and bots to aggregate metadata, and this will require the big data collections to be more careful with their use of metadata. Rather than stuffing a single record in, say, the Dublin Core schema with all the information relating to a piece of data, several individually sensible records should be created, with the aim of making the most of the aggregator bots that build metadata collections.


Carl Lagoze has argued that “metadata is not monolithic ... it is helpful to think of metadata as multiple views that can be projected from a single information object” (Lagoze, 2001). In photography there is increasing use of metadata for editing images, not just describing them. When you edit an image in its Raw state, you change the attributes of an associated XML sidecar file but leave the image data untouched. This means you have one base file and another file that acts like a filter you perceive the image through, allowing totally non-destructive editing and, perhaps more importantly, a big saving in library size. Will we have a web of base files with intelligent filters that we view this base information through, then edit and append to suit our purposes without ever touching the base file?
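The non-destructive idea can be sketched in a few lines. The attribute names and edit operations below are invented stand-ins for real Raw data and sidecar contents; the point is only that the base "image" is never mutated, and the edits are applied as a view at render time.

```python
# Sketch of non-destructive editing: the base image never changes;
# edits live in a separate "sidecar" (here a list of operations)
# that is applied only when the image is rendered for viewing.
base_image = {"exposure": 0.0, "contrast": 0.0}     # stand-in for Raw data

sidecar = [("exposure", +0.5), ("contrast", +0.2)]  # the edit "filter"

def render(image, edits):
    """Apply sidecar edits to a copy, leaving the base file untouched."""
    view = dict(image)                # copy: base data is never mutated
    for attribute, delta in edits:
        view[attribute] += delta
    return view

rendered = render(base_image, sidecar)
```

Discard the sidecar and the original is still there, which is exactly the "multiple views projected from a single information object" Lagoze describes.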


Site 1: http://dublincore.org/

This is the website of the Dublin Core initiative, a not-for-profit organisation helping to develop open standards for interoperable metadata. A great resource site for all things metadata: there are tools for adding metadata to web pages, a Firefox plugin for viewing embedded metadata in HTML and XHTML documents, and plenty of information about current schemas and upcoming proposals.


Site 2: http://www.oaister.org/about.html

This site is a result of the move to a more semantic web. It uses the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to find data in the deep web, hidden behind scripts in institutional databases. As more organisations come on board, this will become a very powerful resource; maybe OAIster is the new Google.
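An OAI-PMH harvest starts with a plain HTTP request. A minimal sketch of building one follows; the `verb` and `metadataPrefix` parameters are part of the OAI-PMH protocol, but the repository endpoint used here is a placeholder, not a real service.

```python
# Sketch of an OAI-PMH harvesting request. ListRecords asks a
# repository for its metadata records; oai_dc requests them in
# the simple Dublin Core format. Endpoint URL is a placeholder.
from urllib.parse import urlencode

def list_records_url(endpoint, metadata_prefix="oai_dc"):
    """Build a ListRecords request URL for an OAI-PMH repository."""
    query = urlencode({"verb": "ListRecords",
                       "metadataPrefix": metadata_prefix})
    return f"{endpoint}?{query}"

url = list_records_url("http://repository.example.edu/oai")
print(url)
```

A harvester like OAIster issues requests of roughly this shape against many repositories and aggregates the XML responses into one searchable collection.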

References

Dublin Core (2009). Retrieved May 20, 2009, from http://dublincore.org/

Hall, W., D. De Roure, et al. (2009). "The evolution of the Web and implications for eResearch." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 367(1890): 991-1001.

Lagoze, C. (2001). "Keeping Dublin Core Simple: Cross Domain Discovery or Resource Description?" D-Lib Magazine 7(1). Retrieved 18 May, 2009, from http://dlib.anu.edu.au/dlib/january01/lagoze/01lagoze.html

OAIster (2009). Retrieved May 20, 2009, from http://www.oaister.org/

Shreeves, S., J. Riley, et al. (2006) "Moving towards shareable metadata." First Monday Retrieved 18 April 2009, from http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/1386/1304




Concept 8

“The daily practice of electronic communication is shaped by over-familiarity with one's own computer system, and a tendency to assume that – as with much more established forms of communication – everyone is operating within compatible and similar systems. When in doubt, seek to communicate in ways that are readable and effective for all users, regardless of their particular systems.” (Allen, n.d.)

Web design should, in theory, be about making a site as accessible and useful to as many people as possible. Countless dollars and hours are spent on corporate websites meant to improve the public's awareness of a brand, yet frequently these sites are not accessible to people with disabilities. Designers and administrators have many tools and resources available to help their sites comply with access guidelines, yet an animated splash page often seems more important than reaching their full potential market.


A diverse range of people of all ages and abilities access the web. Ability is shaped by physical, mental, educational, technological and socio-economic factors. Designers and administrators should be conscious of these differing levels of ability and take them into account when planning and executing web deployments. “The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.” (Berners-Lee, 1997)

The BBC’s Ouch! website is a model of site design that looks good and uses multimedia, yet jumps through all the accessibility hoops. It has options for changing font size and colours site-wide (using cookies and CSS) without destroying the layout, there is a text-only version for Lynx browsers, and you can navigate it without a mouse. Simple stuff, but recent studies point out that large percentages of sites (70-98%, depending on the category of site) are not accessible (Lazar et al., 2004).
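Some of the basics those studies flag can even be checked automatically. One example is a missing alt attribute on images, which leaves screen readers with nothing to announce. Below is a minimal sketch using only Python's standard library; the sample HTML is invented, and real accessibility audits use far richer tools than this.

```python
# Tiny automated accessibility check: flag <img> elements that lack
# an alt attribute, so screen readers would have nothing to announce.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []          # src values of images without alt

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            self.missing_alt.append(attributes.get("src", "?"))

checker = AltChecker()
checker.feed('<img src="logo.png" alt="Company logo"><img src="splash.gif">')
print(checker.missing_alt)
```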

Accessibility of an organisation's website, not only for the physically and mentally disabled but also for the elderly and the less educated, improves the brand. It also improves the overall internet experience when it is standard practice, because for a site to be fully accessible and compliant with the W3C/WAI standards, it must employ strict XHTML and CSS. For this reason alone it would be good practice for all web designers to strive for full accessibility in the sites they design.


Web 2.0 leader Facebook is actively working with the American Foundation for the Blind (AFB) to improve its social networking service; according to the AFB's statistics, 20 million Americans have reported significant vision loss (Wauters, 2009). The other Web 2.0 (and possibly Semantic Web) leader, Google, is also contributing with research into, and implementation of, accessible design for its basic search engine and other services.

Site 1:

http://www.w3.org/WAI/

This is the W3C’s initiative to develop guidelines for Web accessibility, as well as to provide support materials and resources to help people understand and implement it. There is extensive information about the existing standards and the working drafts of tomorrow's standards, as well as links to all the important resources needed to implement accessible web design.

Site 2:

http://www.bbc.co.uk/ouch/

One of the best websites designed for accessibility, and it looks and functions exactly as we would expect a “normal” site to function. It proves the point that you do not have to sacrifice design to achieve accessibility. It uses multimedia including podcasts and images, blogs and message boards, but has alternatives for users who need other options to use the site and access its information.

References:

Berners-Lee, Tim (1997) on the launch of the International Program Office for Web Accessibility Initiative - http://www.w3.org/Press/IPO-announce

Lazar, J., Dudley-Sponaugle, A., & Greenidge, K.-D. (2004). Improving web accessibility: a study of webmaster perceptions. Computers in Human Behavior, 20, 269-288. Retrieved May 18, 2009, from www.elsevier.com/locate/comphumbeh

Wauters, Robin. (2009, April 7). Facebook commits to Making Social Networking More Accessible for Visually Challenged Users. Retrieved May 20, 2009, from http://www.techcrunch.com/2009/04/07/facebook-commits-to-making-social-networking-more-accessible-for-visually-challenged-users/




Concept 24

“File transfer protocol remains the best example of how the Internet enables files to be sent to and from clients, at their initiation, thus emphasising the local autonomy of the individual user, and the arrangement of ‘cyberspace’ into publicly accessible and changeable regions.” (Allen, n.d.)

P2P is conceivably one of the main drivers of broadband take-up in Australia. As a society that once dreamt of egalitarian ideals, the idea of “sharing” is readily defensible, and Australians are amongst the most prolific downloaders of illegal content in the world. Total visits by Australians to BitTorrent websites including Mininova, The Pirate Bay, isoHunt, TorrentReactor and Torrentz grew from 785,000 in April last year to 1,049,000 in April this year, a year-on-year increase of 33.6 per cent, according to Nielsen (Moses, 2009). Music, movies, games and warez (software) all reside on suburban computers. Is this rebellion or opportunism?

Fair-use models are replacing draconian models of copyright protection around the globe, not because of corporate philanthropy but from corporate necessity. In academia as well, the offering of full text under various schemes to promote the interchange of information has irrevocably altered the landscape of intellectual property. Metadata harvesting and the Semantic Web will bring more pressure to bear on existing models of copyright and distribution of intellectual property, as institutions have to free up more data under open models to compete with other institutions doing the same thing to grab market share.

Social value vs market value is the consideration for us all. The growth of social networking and user-generated content implies that people are willing to create and share. They experience content and they create for others, without expectation of payment. So rather than creating for fiscal benefit, these contributions come more from an uncontrollable desire in some people to “share” their experiences and thoughts.

FTP has historically been a one-way street of downloading; Web 2.0, plus an overall maturing of users, has led to a two-way street and less emphasis on the commercial profits to be had from the Web. Are we moving away from a web of “taking/buying” to a web of “sharing”, as we realise that if we co-operate more and profit less in the short term, we all profit in the long term?

Site 1
http://www.journals.uchicago.edu/doi/abs/10.1086/503518

A very well-thought-out legal position on the ongoing problem of music piracy and its perceived damage to the RIAA.

Site 2

http://www.ethics.org.au/

Plenty of resources here to explore the ethical world. Confronting personal knee-jerk reactions with a measured ethical approach will benefit us all.

References:

Moses, A. (2009). "Illegal downloads soar as hard times bite." Retrieved May 22, 2009, from http://www.smh.com.au/news/home/technology/illegal-downloads-soar-as-hard-times-bite/2009/05/27/1243103577467.html


Monday, May 18, 2009

Module 5 - Information Ecologies

"As you read, think about the following questions – you may want to discuss with other students, or make notes in preparation for your concept project - LOG ENTRY: make sure you include some reflections on these questions in your learning log:

  • Q- "How might the metaphor of an ‘ecology’ impact on the way you think about, understand or use the Internet?"

A- It makes me think of the people that make the web/net, not the technology. It also makes me think of the inequities of access and sharing of information. As a big fan of the open source mentality, I think economically and informationally rich governments/corporations need to be careful of the balance in the information ecology, just as they are slowly becoming in the natural ecology. Market values and social values both have to be weighed in the highest policy decisions.
  • Q-"How are the concepts ‘information’ and ‘communication’ understood within the framework of an ‘information ecology’?"

A- Information is the ever growing and changing human knowledge base and communication is the means of retrieving and disseminating this information. Communication is the "flow" between the "nodes". (1)
  • Q- "Why don’t we talk of a ‘communication ecology’?"

A- I have to dispute the suggestion that this concept is not in use. The Annenberg School for Communication at USC has published a lot on this concept:
“A communication ecology approach comes closer to the reality of everyday life where people select new, traditional, and/or geo-ethnic media and interpersonal modes of communication from all of the options they have available to them. For example, we ask people to think about all of the ways they go about gaining understanding or getting information about their community, their health, etc. and to tell us which ways are most important to them. In this way, their responses are in context of their communication ecologies.” (2)

(1) Rafael Capurro, Towards an Information Ecology. In: Irene Wormell (Ed.): Information and Quality. London: Taylor Graham 1990, 122-139.
(2) http://www.metamorph.org/research_areas/communication_ecology_and_icts/

Sunday, May 17, 2009

Module 4 - Evaluating the Web

"In your own words, write an annotation for the source which could communicate to a reader both your 'judgment' of the site according to what you have learnt from the tutorial, and also the following information:
the reliability and authority of the site / source / article
the main ideas or subjects discussed in the article
the purpose for which the site was written (this might include any apparent external interest, intellectual motivation or contextual information)
"

I consider this "source" to be the best of the three used in the last task.

Site 1 : http://www.isi.edu/in-notes/rfc959.txt
author : J. Postel, J. Reynolds.
institution : University of Southern California, Information Sciences Institute.
summary : The current RFC (959) for the File Transfer Protocol (FTP). An overview, history and technical specifications for FTP. Authors Postel and Reynolds. USC/ISI

Site/page evaluation: This page is part of the Information Sciences Institute, which is part of the University of Southern California, which establishes its authority. The original paper is from 1985 and is still the current protocol. The protocol is heavily linked to by many educational institutions (in fact I had trouble finding anything else under the search keywords) and is also on the W3C site. This is the definition of FTP within TCP/IP and includes an overview and history of the protocol. It is a reference work and has no bias or agenda.



"Compare your final analysis and annotation with the material you saved for the last task, and think about these questions (you may wish to discuss these questions in your group)

in terms of your own future use, which 'body ' of information (ie. the original 'snapshot' of the site, or your own, annotated, analytical version) would be most useful to refer back to?

In term of external users (i.e. if you included this site as a hyperlink or resource on a website) which body of information would best help them judge if the site was useful or of interest to them?"

In this case there is not a lot of difference between the site's own intro (which I didn't use) and my own. However, I can see cases where there would be a difference, mainly in context. For my own use, the annotated bookmarks I keep in del.icio.us serve me well and are accessible away from my PC. I can include all of the necessary info within the bookmark form provided, including tags.

Once again due to the fact that this references a protocol, there is no big difference between the snapshot and the annotation for external users.

Module 4 - Organising search information task

"Choose the best three sources found in the previous task...using whatever software or tool you think is most appropriate, record the following information about these sites
  • URL
  • author
  • institution
  • blurb/summary/screenshot
For me the most effective tool for this is del.icio.us; it can't record the screenshot, but can do everything else. I use the comments box for the summary of the site (in my own words, as this is where I can add my context), the author and the institution. I also add the author and institution to the tags, for easier searching.

Site 1 : http://www.isi.edu/in-notes/rfc959.txt
author : J. Postel, J. Reynolds.
institution : University of Southern California, Information Sciences Institute.
summary : The current RFC (959) for the File Transfer Protocol (FTP). An overview, history and technical specifications for FTP. Authors Postel and Reynolds. USC/ISI

Site 2 : http://www.youtube.com/watch?v=eA9mnY1Z2so
author : Prof.I.Sengupta
institution :
Department of Computer Science & Engineering ,IIT Kharagpur.
summary : Video lecture on Client Server Concepts DNS,Telnet,Ftp.
Prof.I.Sengupta. Department of Computer Science & Engineering ,IIT Kharagpur.

Site 3 : http://archive.salon.com/tech/col/rose/2001/07/20/napster_diaspora/index.html
author : Scott Rosenberg
institution : Salon.com emag
summary : Opinion piece on the post Napster possibilities of file sharing.
Scott Rosenberg. Salon.com emag.

As most of my specialised searches using my original keywords turned up dry information on the technical aspects of the protocol, I will have to devise better searches to get information on the social impacts of FTP, i.e. Napster, BitTorrent etc.

Module 4 - Boolean Search

"Taking the same key words of your last search, think about how you would best search for the following..."

  • The biggest number of hits relating to these keywords.
Results 1 - 10 of about 32,400,000 for file transfer protocol - Strategy = all the words no operators, but then I thought I could go one better...
Results 1 - 10 of about 1,510,000,000 for file OR transfer OR protocol - Strategy = any of the words individually.


  • Information most relevant to what you ACTUALLY wanted to look for
Results 1 - 10 of about 1,550,000 for "file transfer protocol" - Strategy = exact phrase to narrow search
Results 1 - 10 of about 428,000 for file transfer protocol history rfc - Strategy = more keywords to narrow search further.

  • Information coming only from University websites.
Results 1 - 10 of about 32,000 for file transfer protocol history "file transfer protocol " site:.edu - Strategy = restrict search to .edu domains
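The strategies above can be summarised as a tiny query builder: quoting forces an exact phrase, extra keywords narrow the results, and site: restricts the domain. The function name and parameters below are my own invention; it simply composes the query string an engine like Google would receive, and says nothing about hit counts.

```python
# Compose a search query from the strategies used above:
# exact phrase (quoted), narrowing keywords, and a site: restriction.
def build_query(phrase=None, keywords=(), site=None):
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')   # exact phrase narrows the search
    parts.extend(keywords)            # extra keywords narrow it further
    if site:
        parts.append(f"site:{site}")  # restrict to a domain, e.g. .edu
    return " ".join(parts)

q = build_query(phrase="file transfer protocol",
                keywords=["history", "rfc"], site=".edu")
print(q)
```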


Sunday, May 10, 2009

Module 4 - Search engine task

Have installed Copernic as per the previous task and will perform my search with it and Google. The keywords will be file transfer protocol, as this is one of the concepts that I will be using in my concepts assignment.

Google results -
First five Google results -
  1. http://en.wikipedia.org/wiki/File_Transfer_Protocol
  2. http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci213976,00.html
  3. http://www.faqs.org/rfcs/rfc959.html
  4. http://www.imagescape.com/helpweb/ftp/ftptop.html
  5. http://www.filetransferplanet.com/ftp-guides-resources/

Copernic results -
First five Copernic results -
  1. http://en.wikipedia.org/wiki/File_Transfer_Protocol
  2. http://www.faqs.org/rfcs/rfc959.html
  3. http://www.filetransferplanet.com/ftp-guides-resources/
  4. http://www.columbia.edu/kermit/
  5. http://searchenterprisewan.techtarget.com/sDefinition/0,,sid200_gci213976,00.html
OK, comparing the two searches above shows the top 5 results are almost identical, but the numbers of results are radically different. I suspect this is partly configuration and partly that the Google engine is not included in the free Copernic software. Copernic is a bit overwhelming on first use, especially when you are used to Google. However, I suspect that the full version in the hands of an experienced user would yield good results. Google Scholar looked promising as well; it led me to the SAGE Journals site, which I will register with for more research options. As Copernic had the only university result, http://www.columbia.edu/kermit/, in the list, on this quick assessment I give it the honours. I think this task is sort of like getting new users to compare Windows photo editor and Photoshop: similar but very different. I look forward to improving my searching techniques (not just in this unit but in my use of the web/net overall).

Module 4 - Tools and plugins

Download and install at least 2 unfamiliar programs from the list and evaluate them. I have and use most of the programs on the list except Copernic and the offline browsers, so I will try these. For the record though, here is a quick run through of the whole list.

  1. Acrobat: I have version 9 which is the latest to date. I also use Acrobat Pro and Photoshop to create and edit pdfs.
  2. Have the latest Flash/Shockwave player and use Flash MX to create .fla and .swf files.
  3. Media players. I agree, you need more than one due to proprietary formats. I have and use QuickTime, RealOne, WMP, VLC, DivX and Ogg Vorbis.
  4. Have downloaded Copernic and after a quick trial it looks like a piece of software I will be using a lot more as I study more units. At first glance the tracking feature looks good and so do a lot of the features locked in the basic version. Will be upgrading to full version and will update this post when I have played with it. I will be using this in conjunction with Google for my concepts assignment research.
  5. I use del.icio.us, and drag current stuff to the bookmarks toolbar if it is of passing interest. At the cost of grade points, I cannot bring myself to download and install anything that has 'buddy' in the title. Especially for Windows.
  6. Will try these.

Sunday, April 26, 2009

Module 3 - Web 2.0

FURL vs html list.

I haven't bookmarked a page in my browser since I discovered del.icio.us. There is no comparison between the FURL list and HTML: FURL allows tagging and sorting. This user-added metadata makes the data relevant to the user and their tasks. The added benefit of allowing others access to your list gives FURL the social networking advantage as well.

I see the difference between them as the same as between websites and blogging: hard vs easy. For beginners it's a no-brainer; once an expert you may need more control. As I am no expert, the choice is easy.

Module 3 - Blogs

"In your learning log, record your thoughts. Consider various uses for blogs such as citizen journalism and personal blogging. Have you seen in your net travels any interesting uses for blogs? This blog entry is an opportunity to tell us what you really think of blogging!"

On BlogCatalog alone there are 60 categories of blogs! In each category there is a plethora of blogs under that general heading, most with their own specialist approach. For instance, in the Computer category there is a blog specialising in netbooks, while another is a take on computing from a female-oriented point of view. So this question may have been easier if it was “what uses can't you put a blog to?”.

After all this blogging and reading of blogs, I view them now as template-driven Web 2.0 sites, rather than personal diaries. I was never interested in blogs because of all the “noise” out there, but have changed my mind after due consideration. I now see them as a great access point for writers and readers alike: for the writers, no need to get your head around HTML and CSS; for the readers, clean and consistent navigation and a familiar format (even across different blog platforms).

My blog standout for this week would be Dennis Jerz; his blog/site (and the distinction is really getting blurred) has plenty of resources and helped me a lot.

Module 3 - Standards

Optional 'standards' task 1

"Make a summary of what you believe are the 5 most important rules for writing online."

You know your content is good, no, great, right? So the most important thing is to capture a user and hold them. With that in mind, this is my top 5 in order.

  1. Relevant. In the first second, as your visitor's eyes flick over your page and their brain looks for metadata matches, they had better see information, or links to information, relevant to why they navigated to your page in the first place, or they are gone.
  2. Best and latest content at the top. Make them scroll and you increase the percentage of users who depart before they see your great work.
  3. Concise. Get to the point! Then provide links to more detailed information. Annotated links are a great way to do this.
  4. Scannable. Smart use of headings, lists, and meaningful, annotated links will engage the user and build trust.
  5. Well laid out. To some, conventions are boring, but in the case of layout, colour and font usage, a lot of time, money and effort has been put into establishing the most efficient way to present information to people. Experiment by all means, but ignore conventions at your peril. Jakob Nielsen's eye-tracking research is one good example to bear in mind when laying out your page.


Optional 'standards' task 2

Validate your page:

In this task we have to validate our code at the W3C HTML validator service, then record our thoughts on the results. Page passed, first time!



Examine copyright issues:

All of the content on my page is mine including the image of the printed circuit board. If I had put the Curtin logo at the top of my page without permission I would have been in breach of copyright as the logo belongs to the university. If I used it as part of a task I would have acknowledged the source and sought permission to use it.


ftp:

I uploaded my page and the image associated with it to the server that my business website resides on. I used the browser-based file management software, but could have done the same thing with FileZilla.

The url of my webpage is http://www.pennanthillsframing.com.au/net11page.html

Module 3 - HTML tags

For this task we had to do a tutorial on html and then make a single page incorporating the skills taught in the tutorial. As a Dreamweaver user, I have not done much hand coding, but I always looked at my source code and tweaked the inevitable layout problems. So this task was not at all foreign to me as I understood all the concepts and had some previous practice.

However I did get a kick out of handcoding from start to finish, although it did seem very time consuming. With practice and some tools, I am sure this will become easier and more natural. Everyone I know who went from WYSIWYG editors to handcoding has said they would never go back. I trust them :)

In comparison to blogging, well, it's the same thing really, just a different interface. Really experienced bloggers would have no problem handcoding a site, but blogging is easier, with a shallower learning curve than HTML. Plus no FTP, and a stable platform for archiving. Handcoding does give flexibility and greater control and knowledge though. For me, in the context of keeping this journal, I prefer blogging for speed and ease over HTML. For my business site I wouldn't consider blogging.


Here is the link!



Didn't have any real problems with this task except for getting the list centred while keeping the bullets aligned. Had to go with a table.

Monday, April 20, 2009

Module 2 Chat

Chat task is to choose a method, then arrange to meet with other students in the unit and have a chat.

Chose ICQ because it was first on the list and I figured most students would go for it. However, as I was one of the last to finish this task, it wouldn't have mattered which method I chose. I have IRC'ed before and found it clunky (although less intrusive on my PC). I play Entropia regularly and have dabbled in Second Life, but thought it was unrealistic to expect people to download and install either of these large and graphics-intensive programs just to chat.

After installing ICQ, I was able to meet up with Amanda for a quick one-on-one chat and then with Jessica for a group chat. One-on-one was OK, although not very efficient, waiting for responses. Group was laggy, with one person's response overlapping with another's question. Overall I would rate chat as recreational communication.

Friday, April 17, 2009

Module 2 Newsgroups

Newsgroups task. Get a reader working, choose a group, post a message, record to journal.

After reading through the task information, I tried web-based services first... and they sucked. Outlook was already installed and ready to go from the email task, so off we go.

First stop, the ISP for some info; for once Telstra's part of the deal went smoothly. Opened Outlook, started the newsreader, entered the news server name and downloaded the newsgroup list. Pretty quick, a couple of minutes, but we are on DSL 2.0.
In my business we use Photoshop heavily, so it was the obvious group topic to choose.



After subscribing I downloaded all the most recent (300) messages and started to have a read.



Well flame on! I thought that teenage gamer sites inhabited by crusty, old school, text based, adults had the worst flamers, but ouch. The first thread made me wince. Get past that and there is some good content but wow people are narky when they aren't held to account.

I found a thread about graphics tablets, which are devices for drawing digitally and posted a reply. Here is a screen shot, and cut and paste.





ahall@no-spam-panix.com

Re: Tablet recommendations

wrote in message news:...
>
> I think I will finally get a tablet (mainly for photoshop,
> but I might use it in AE and Illustrator too).
>
> I really do not know how to size it. Would 6x8 be big enough
> for non-pro use? That should fit nicely on my desk.
>
> How about the 6D pen? Is that a worthy upgrade?
>
> Thanks in advance,
>
>
> --
> Andrew Hall
> (Now reading Usenet in comp.graphics.apps.photoshop...)

Hi Andrew,

I have a Wacom 6x8 which I use professionally in my wide format printing business.
I use it mainly for selecting in Photoshop. I think it is a good size for this
purpose. I would recommend that you consider a wireless keyboard and mouse so that
you can put them to one side when you want to use the tablet. Can't help you with
the stylus question as I have only used the one that came with it. To get a more
realistic feel when drawing, I place a sheet of paper on the tablet for friction.

Jason Radich.



Thursday, April 16, 2009

Module 2 Lists

"What are the pros and cons of email list vs discussion boards?"

Email lists

Pros
  • If a user is interested in multiple groups/interests/hobbies, an email list will deliver all of this information to one place, making it an efficient way to keep up to date.
  • Low bandwidth. If users are on a slow, expensive (rural, developing economies, mobile devices) or shared connection this will be important.
  • Being text based with attachments frowned upon or banned, lists are good for access by mobile devices.
Cons
  • Volume can be cumbersome.
  • Spam
  • No attachments
  • Off topic frowned upon or banned.
  • Less interactive than boards. This is not true for all lists as a lot of online communities have boards as well.
Discussion boards

Pros
  • Provides a sense of home or identity via a central location.
  • Contains many areas of discussion so users can go off topic.
  • Attachments and HTML.
Cons
  • High bandwidth.
  • More casual users means a generally lower quality of posts and lots of posts that don't add anything to the discussion.
  • More anonymity for users can mean more flaming and offensive posting.
  • Requires users with multiple interests to visit many places frequently to keep up to date.

Wednesday, April 15, 2009

Module 2 Email Tasks - 5

"How have you organised the folder structure of your email and why?"

Aside from my Inbox, the two root folders are Business and Personal. The next layer is by organisation/sender, with some generic folders like Newsletters. Once again, due to the low volume of email that I receive, this is all the folder management I need.
Deleted, Sent and Drafts are obviously the system folders I use as well.

Monday, April 6, 2009

Module 2 Email Tasks - 4

"What sort of filters do you have set up and for what purposes?"

Incoming mail passes through a spam filter first for obvious reasons, although like everyone I occasionally have to trawl through the spam folder to find legitimate mail. This is better than being confronted every log in by hundreds of annoying and offensive messages.
What few messages make it through the spam filter are then sorted by priority and sender. I only average 10-20 messages per day so these two filters are sufficient to prioritise and sort my mail.
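The sort-by-sender idea can be sketched as a lookup from sender patterns to folders. This is a toy illustration, not how Outlook rules actually work internally, and the addresses and folder names are made up:

```python
# Toy sender-based filter: route each message to a folder by matching a
# substring of the From address. Patterns and folders are illustrative only.
RULES = {
    'newsletter@': 'Newsletters',
    '@supplier.example': 'Business',
}

def route(sender: str) -> str:
    """Return the folder a message from this sender should land in."""
    for pattern, folder in RULES.items():
        if pattern in sender:
            return folder
    return 'Inbox'  # anything unmatched stays in the Inbox

print(route('newsletter@news.example'))  # Newsletters
print(route('friend@gmail.com'))         # Inbox
```

Real mail clients layer priority rules on top of this in the same way: first matching rule wins.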

Module 2 Email Tasks - 3

"In what ways can you ensure that an attachment you send will be easily opened by the receiver?"

The foolproof way is to communicate with the recipient prior to sending the file, to ascertain what software they have, and supply a file format that is compatible. If this is not possible, stick to open formats (as opposed to proprietary) or formats that have a free and readily available reader/viewer/player. A link to this software and brief instructions on how to use it would be polite.

Some examples of open file formats are:

Document
  • PDF - Portable Document Format. Formerly proprietary, this format was made 'open' in 2008. It is a container for documents that have text, bitmap and vector graphics.
  • ODF - Open Document Format. An open format for office documents, made popular by Sun Microsystems' OpenOffice.org suite.
  • HTML - Hyper Text Markup Language.

Multimedia

  • PNG - Portable Network Graphics. Lossless bitmap image format developed to replace GIF.
  • SVG - Scalable Vector Graphics. XML based file format developed by the W3C.
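One practical consequence of open formats is that their structure is documented, so a file's type can be checked directly rather than trusting its extension. A minimal sketch in Python, using the 8-byte signature from the PNG specification:

```python
# Check whether a file is a PNG by its 8-byte magic number, which every
# valid PNG file must start with (per the PNG specification).
PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def is_png(data: bytes) -> bool:
    """Return True if the byte string starts with the PNG signature."""
    return data.startswith(PNG_SIGNATURE)

# A real PNG starts with the signature bytes; a JPEG starts with 0xFF 0xD8.
print(is_png(PNG_SIGNATURE + b'rest-of-file'))  # True
print(is_png(b'\xff\xd8\xff\xe0rest-of-file'))  # False
```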

Module 2 Email Tasks - 2

After yesterday's epic effort, I will try to be succinct on this question; it seems straightforward.

"In what cases would you find it useful to use the 'cc', 'bcc' and 'reply all' functions of email?"

'Cc' means carbon copy and is used to add secondary addresses to a message. Usually the Cc recipients are being kept in the loop with visibility, but are not expected to reply to, or act on, the message. An example of this would be a department head sending requests or instructions to managers (who would be in the 'To' field), while also sending the email to a managing director, who would be in the 'Cc' field.

'Bcc' means blind carbon copy, so you can have multiple recipients who aren't aware of each other. For example, inviting job candidates to a second interview.
'Reply all' is used instead of 'Reply' to reach all visible recipients (To/Cc; Bcc recipients are not in the headers, so replies cannot reach them). For instance, if an email sent out to a sporting organisation regarding an impending event has a major error regarding the address, you could reply all to make sure everyone who received the original email is aware of it.
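The To/Cc/Bcc distinction shows up directly when building a message programmatically. A sketch using Python's standard email library (all addresses invented for illustration); note that the Bcc addresses are deliberately kept out of the headers and supplied only to the mail server at send time:

```python
from email.message import EmailMessage

# Build a message with visible (To/Cc) and hidden (Bcc) recipients.
msg = EmailMessage()
msg['From'] = 'head@example.com'
msg['To'] = 'manager1@example.com, manager2@example.com'
msg['Cc'] = 'director@example.com'   # visible to every recipient
bcc = ['candidate@example.com']      # never written into the headers

msg.set_content('Please action the attached request.')

# At send time the Bcc addresses go in the SMTP envelope only, e.g.:
#   smtplib.SMTP('mail.example.com').send_message(msg, to_addrs=visible + bcc)
visible = msg['To'].split(', ') + [msg['Cc']]
print('candidate@example.com' in str(msg))  # False - Bcc leaves no trace
```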

Phew, that was easy, and not a screenshot in sight.

Sunday, April 5, 2009

Module 2 Email Tasks - 1

In the words of Gloria Gaynor, "I am back, from outer space!". I have baulked at this module because of the email tasks. Web-based mail has been my choice after all the pain of maintaining .pst file backups from within Outlook and the hassle of migrating ISPs (and therefore email addresses).

Yahoo has been my primary mail account for years. Today I finally buckled down and set up an additional mail address on my ISP's server, and set up Outlook and my shiny new iPhone with this new mail account. Of course everything was easy, except the Telstra part.

Now over to the Blackboard to do the "Email Tutorial". Back soon.

Task 1 : What information about a user's email, the origin of the message and the path it took can you glean from an email message?

This task seemed simple on the surface but needed a bit of reading to complete.

http://www.visualware.com/resources/tutorials/email.html

http://www.sendmail.org/dkim/technicalOverview.html

These are the two sources that quickly and clearly explained the syntax used in the headers, but I also read wiki entries and other pages that gave partial information. Email header conventions are like HTML headers: slightly different between versions and systems. Header content depends on the application used to create the message (plus deliberate spoofing) and the mail server systems it passed through.

For this task I sent myself a message from Yahoo to my new Bigpond address. Then in Outlook, right click on the message in the Inbox. Select Options from the menu, Internet headers.



Return-Path:
Received: from nskntingx07p.mx.bigpond.com ([66.163.178.121])
by nskntmtas06p.mx.bigpond.com with ESMTP
id <20090406003140.hnm57.nskntmtas06p.mx.bigpond.com@nskntingx07p.mx.bigpond.com>
for ; Mon, 6 Apr 2009 00:31:40 +0000
Received: from web34206.mail.mud.yahoo.com ([66.163.178.121])
by nskntingx07p.mx.bigpond.com with SMTP
id <20090406003139.dbli17747.nskntingx07p.mx.bigpond.com@web34206.mail.mud.yahoo.com>
for ; Mon, 6 Apr 2009 00:31:39 +0000
Received: (qmail 12471 invoked by uid 60001); 6 Apr 2009 00:31:37 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1238977897; bh=WQQeqrN9VMa1DV024/4J/HQCl+gGBcPoUDZpkwg1pWU=; h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Version:Content-Type; b=oG+o3CAFfXV46okSpfreGb4h5MBk66iEnxaUhq335YMRcginKddhSlqhbRW/zd64i7e3lG7LXnqKBpto/L02Giqr0PNkkwCKuojpjurvX4LaScaUj/sDGJBWiMzKF3f9K3lc59T0VgO2OXDN1PwQBqZAm0AeBKNyLIFMl6wliTs=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
s=s1024; d=yahoo.com;
h=Message-ID:X-YMail-OSG:Received:X-Mailer:Date:From:Subject:To:MIME-Version:Content-Type;
b=UsmafdJMrygFdl54dqTYkZvoY4it/VtZW2cXeSsoH9G+AKuI1NBQp3w6bDaINdmU8XscZUs0HX2CT+8L5Tm7tgOj8117JSUFqJ6eTiPmotHA2S9vtyKrIqGcRQr4wKOk598RH81KMNHd/ZsFF8W5/Zj+vvd9HM4Cx+3MBA/9J4E=;
Message-ID: <292351.8925.qm@web34206.mail.mud.yahoo.com>
X-YMail-OSG: InYFs7gVM1l78.2oqOmdVfJXaTNoqcZEI0ysXA4dWbNx7d9z6wlFfSur3GlUkLjfQdJK7ueh3fyLfpziELrSjiHNUXZPKSRB51YXeb_qYPx7OCWhNMSVYZpfBRpBdPg7NgLGilc7hzjnVXF3cwRZKZ56EydeC2uuMRRYoaXaovVXNQW7urEhDR2NXKfgWv4yGt5H9IWh81tq7pLHbxyQMc8fi.Wz.VM.RDK1r0BsYZDkmQVFT5C9uoonvKpg.hSuAcdQFoCDJfrh5Qfj.MlRw.DZQIXODQQO.WzY9k0.evYKPIsKiqXP78e51SMNQDJNIRINQBcp4lNMcNIyO75soZhhPNGreLVmKo7Rpa90h2Y-
Received: from [121.218.223.76] by web34206.mail.mud.yahoo.com via HTTP; Sun, 05 Apr 2009 17:31:37 PDT
X-Mailer: YahooMailWebService/0.7.289.1
Date: Sun, 5 Apr 2009 17:31:37 -0700 (PDT)
From: Jason Radich
Subject: test
To: jason_radich@bigpond.com
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="0-1432617849-1238977897=:8925"
X-RPD-ScanID: Class unknown; VirusThreatLevel unknown, RefID str=0001.0A150202.49D94D6C.0015,ss=1,fgs=0

From top to bottom this header text contains:

  • Return-Path: reply address. Can be spoofed.
  • Received: As messages move through mail servers, a new Received header line is added to the beginning of the headers list. Each of these lines contains tokens, being: (from), (by), (id), (for) and (;)

    So the first received line can be broken down thus:

    from nskntingx07p.mx.bigpond.com ([66.163.178.121])
    by nskntmtas06p.mx.bigpond.com with ESMTP
    id <20090406003140.hnm57.nskntmtas06p.mx.bigpond.com@nskntingx07p.mx.bigpond.com>
    for ; Mon, 6 Apr 2009 00:31:40 +0000
This is the information relating to the Bigpond mail server, which is the last server in the chain. Note the IP address: there can be multiple Received lines, and by checking the DNS name in the 'from' and 'by' tokens against the IP addresses in the chain, you can see if someone is spoofing.

  • DKIM-Signature: DomainKeys Identified Mail is a domain-level authentication scheme for email using public key cryptography.
  • X-Mailer: the sender's mailer software, in this case Yahoo web mail.
Then we have the header lines that are visible in most email messages, being:
  • Date
  • From
  • Subject
Then there are references to the encoding (in this case MIME 1.0) and the content type, which could be plain ASCII text (text/plain), but in this case is multipart/alternative, probably due to some coding in the signature as it is from Yahoo.

In summary the information we can glean from this email's header regarding, the user's email, the origin of a message and the path it took is as follows.

The email was written by Jason Radich using the Yahoo mail web application. Using Network Tools to check the DNS name against the IP address, the origin is confirmed as Yahoo. The message has three Received lines, but only seems to have gone through two mail servers, Yahoo and Bigpond. The remaining Received line (the qmail one) I think is internal at Yahoo, as it doesn't have a full set of tokens.
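The same header fields can be pulled out programmatically with Python's standard email library. A minimal sketch using a shortened stand-in for the header block above (real headers carry many more fields):

```python
from email import message_from_string

# A cut-down version of the headers shown above; the indented second line
# is a folded continuation of the Received header, per the message format.
raw = (
    "Received: from web34206.mail.mud.yahoo.com ([66.163.178.121])\n"
    " by nskntingx07p.mx.bigpond.com with SMTP; Mon, 6 Apr 2009 00:31:39 +0000\n"
    "From: Jason Radich <jason@example.com>\n"
    "Subject: test\n"
    "To: jason_radich@bigpond.com\n"
    "\n"
    "(message body)\n"
)
msg = message_from_string(raw)

# Each relay prepends its own Received line, so get_all() returns the hops
# newest-first: read them bottom-to-top to follow the message's path.
print(msg['Subject'])                # test
print(len(msg.get_all('Received')))  # 1
```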

Probably went into overkill on this question having been away from my blog for nearly a month, but it seemed to require more investigation than the simplicity of the question implied. Very interesting to look at this metadata.


Friday, March 13, 2009

Module 2 email list moderator

Cynthia asked if I would moderate the group's mail list for Internet Studies. After some consideration I agreed, even though I know I am getting to the limit of my workload. I don't think it will take up too much time, as interest will die off pretty quickly. Plus I am to get 4 more students to mod with me, so how hard can it be? Dulcie was the first to volunteer. Yay Dulcie.

I set up a poll which was just a fun thing, and we have had five responses so far.

Click on screenshots for hi res



I also set up an address book, which was to centralise all of our blog urls so that we didn't have to hunt around on the bulletin board.









So far I have worked out how to email the group, an individual or the owner. I think those are the only options. Mind you, this is Yahoo Groups, not a tailored or commercial email list.

Monday, March 9, 2009

Module 1: Traceroute - extra credit


Running Windows XP, I am gunked up with bloated software, so I passed on exploring internet tools clients. Still want to do the extra couple of log entries though.
So, to the task:

  • ping the Blackboard site from my PC and compare the result with the tools site. Evaluate.
  • traceroute from my PC to curtin.edu.au. Compare and evaluate versus the results from the tools site.
click on screenshots to enlarge

Ping task

screenshot of ping result



copy and paste...

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Jason Radich>ping curtin.edu.au

Pinging curtin.edu.au [134.7.179.56] with 32 bytes of data:

Reply from 134.7.179.56: bytes=32 time=78ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109

Ping statistics for 134.7.179.56:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 76ms, Maximum = 78ms, Average = 76ms

This task was ambiguous in its wording, as it asked for the Blackboard site to be pinged when the previous ping task specified curtin.edu.au, and they are different addresses. Seeing as we are to compare results, I discarded the Blackboard site requirement and pinged curtin.edu.au again.

This result is what I would expect, considering the difference in physical distance between the tools site's server and my local machine. My thoughts would be to use services on your own machine, or at least a local tools site, for this type of research.
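The statistics Windows prints can be recomputed from the reply lines themselves; a small sketch (note the average of 76/76/76/78 ms shows as 76, consistent with Windows truncating to whole milliseconds):

```python
import re

# Re-derive min/max/average RTT from the ping output pasted above.
output = """\
Reply from 134.7.179.56: bytes=32 time=78ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109
Reply from 134.7.179.56: bytes=32 time=76ms TTL=109
"""
rtts = [int(ms) for ms in re.findall(r'time=(\d+)ms', output)]
print(min(rtts), max(rtts), sum(rtts) // len(rtts))  # 76 78 76
```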


Traceroute task

screenshot of trace route



copy and paste of trace route

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Jason Radich>tracert curtin.edu.au

Tracing route to curtin.edu.au [134.7.179.56]
over a maximum of 30 hops:

1 1 ms 1 ms 1 ms mygateway1.ar7 [10.1.1.1]
2 18 ms 17 ms 16 ms 172.18.209.3
3 17 ms 16 ms 16 ms 172.18.65.182
4 17 ms 16 ms 17 ms 172.18.239.33
5 16 ms 17 ms 16 ms TenGigabitEthernet4-2.ken29.Sydney.telstra.net [203.45.3.5]
6 18 ms 16 ms 16 ms TenGigE0-1-0-2.ken-core4.Sydney.telstra.net [203.50.20.1]
7 16 ms mygateway1.ar7 [10.1.1.1] reports: Destination protocol unreachable.

Trace complete.

Conclusions

12 fewer hops, way less time. Physically closer, fewer nodes. Plus off-peak time here, start of business in the USA = less traffic locally.

Eeergh. I am tired.