
CIS1007 – Thomas Brown

Week 16

This week focused on the basics of programming. Some programming languages were discussed, as well as how languages can be divided into ‘compiled’, ‘interpreted’ and ‘hybrid’ categories; C, PHP and Java are examples of each respectively. Languages can also be classified as ‘strongly’ or ‘weakly’ typed, where strongly typed languages enforce stricter rules about the types of values a variable can hold and typically require more information in variable declarations than weakly typed languages do. Pseudo code, a style of writing for planning programs, was also covered. Because it borrows conventions common to programming languages, pseudo code allows a person to sketch out the logic of a program without having to know which language it will eventually be written in.

Unfortunately this week’s reading could not be completed as the sources were no longer available.

The lab session for this week comprised a task in which students had to create pseudo code. A paragraph written in plain English was given, which then had to be converted to pseudo code. On the whole I found this task, along with the week’s focus, very interesting. Programming is something I am keen to explore further and, as I enjoy the Web, I felt this was a pleasant convergence for me.

Week 15

This week focused on writing styles for the Web. This is a useful concept that may often be overlooked. Writing styles differ depending on the medium and the Web is no exception. However, there are also sub-sections within the Web itself: appropriate writing styles may differ depending on whether the medium is a business website, social media or a mobile device.

This week’s reading comprised:

  • Bly, R.W. (2009) “Writing for the web”. Target Marketing. 32(8) 14-15.

Nielsen’s article ‘How Users Read on the Web’ concentrates on how users “scan” webpages for the content they need, with only a very small proportion reading the whole text. He explains that an appropriate writing style for the web would have “highlighted keywords”, “meaningful sub-headings” and “bulleted lists”. He also points out that points should be separated out and that the word count should be much lower than in non-web mediums.

Nielsen’s more recent article ‘Writing Style for Print vs. Web’ continues in the same vein, pointing out that web text must be ‘scannable’. He declares that web writing should have a much higher level of specificity than its printed counterpart. He explains that web users want “actionable content” and do not appreciate the anecdotes and filler that make up significant portions of printed text. Another interesting concept touched upon is that of e-learning: Nielsen suggests that the web is not necessarily a good medium for in-depth learning compared to books and presentations. Finally, he explains that web users want to control their experience online, which may differ from the more passive experience found in other mediums.

Morkes and Nielsen’s article shows that for optimum usability a web page should have concise, scannable and objective text. Interestingly, this article is in fact a print-style version of Nielsen’s article ‘How Users Read on the Web’. They point out three main features of web users. Firstly, they do not read web content in the same way they might in printed media; instead, web users “scan” pages for key fragments of text. Secondly, users do not appreciate long webpages that they have to scroll through excessively; they would prefer more concise content. Thirdly, web users appreciate factual information over “marketing fluff”. The article also covers three key studies that were performed in order to obtain information on the aforementioned traits of web users.

I was unfortunately not able to access Bly, R.W. (2009) “Writing for the web”.

This week’s lab session looked at user ‘personas’. These personas are imagined representations of the target audience of a website or product. They can then be used to tailor the style and content of websites for the greatest impact on the intended audience.

Week 14

This week’s focus was on safety and security on the Web. A distinction was drawn between the two ideas. Safety can be seen as our personal behaviour online: who we speak to, the settings we have on our browsers and so on. Security, on the other hand, falls into the more technical aspects such as virus protection, malware defences, firewalls and the like. A key principle is that we can have all the security we want and still be at risk on the web because of our behaviour; responsibility for safety therefore falls largely on the individual.

Reading for this week contained:

  • Yellowlees, P.M. & Marks, S. (2007) “Problematic Internet use or Internet addiction?”. Computers in Human Behavior. 23(3) 1447-1453.
  • XueMei, Q. & Hua, N. (2010) “Study on Causes and Strategies of Online Gaming Addiction among College Students”. International Conference on Multimedia Technology. ICMT. 1-4.
  • Adeyinka, O. (2008) “Internet Attack Methods and Internet Security Technology”. Second Asia International Conference on Modeling and Simulation. AICMS 08. 77-82. 
  • Kim, W., Jeong, O.R., Kim, C. & So, J. (2011) “The dark side of the Internet: Attacks, costs and responses”. Information Systems. 36(3) 675-705.

Unfortunately I was not able to access “Problematic Internet use or Internet addiction?”, receiving the error message “DOI Not Found”.

XueMei and Hua’s article puts forward the idea of ‘online game addiction’, suggesting that people can become addicted to online gaming and that this addiction can be harmful both physically and mentally to the sufferer. The article states that the number of online gamers rose from ‘1.25 hundred million’ in 2007 to ‘2.65 hundred million’ by 2009. It also declares that the ‘rate of yearly Addiction reached 41.5 per cent’; however, this figure may be misleading as it does not define what the percentage is of. Is that 41.5% of the world’s population? Of China’s? Of internet users? Or of online gamers? The article suggests there are many factors behind ‘online game addiction’, from external influences to inherent personality traits. Although it is not entirely clear due to the language of the article, XueMei and Hua appear to suggest that in order to reduce ‘online game addiction’ producers of games should develop ‘their consciousness of social morality’ and remove elements of violence and erotic content. They also suggest that teachers should take a keener interest in treating ‘online game addiction’ and that students should be warned of the dangers associated with online gaming. From a personal standpoint, I feel that the article does not offer enough of a solid argument to back up the claims and recommendations it makes.

Adeyinka’s article concentrates on different methods of attack through the internet and security systems that help prevent the success of these attacks. The article includes a helpful table which displays the relationships between online attacks and the security technology used to deal with the attacks (Fig 14.1). The article explains some of the attack methods such as viruses, hacking, trojans and worms in further detail. Some technologies used for security are also discussed in detail such as cryptography, firewalls and anti-malware software.

Fig 14.1 – Table from Adeyinka’s article showing internet attack methods and the security technologies used to counter them.

‘The dark side of the Internet’ points out that the internet’s development has been somewhat based on an “ideal world” where all users are harmless entities. Unfortunately, this is not the case for all users: attacks can be, and are, performed by more malicious bodies. The article defines these attacks, explores their damage implications and considers responses to them. It points out that on and off the internet the world is “inhabited by the same people” and that, while unpleasant, it is therefore an inevitable and “natural consequence” that the negative should come along with the good. The article sets out the possible damages associated with attacks, ranging from loss of money and defamation to physical harm. The different attack methods are grouped into two key categories: the technology-centric and the non-technology-centric. The article concludes that while governments should do more to ensure safety and security online, the reality is that, just like in the offline world, it may not be possible to completely eradicate unscrupulous behaviour.

In this week’s lab session the class completed a quiz which tested knowledge of HTML and CSS. There was a multiple choice segment along with a practical assessment. I felt I performed well in both aspects and this helped me to feel more confident in my knowledge. The next section of coursework was also released this week and is a priority.

Week 13

This week’s focus was on the Web’s current state, with particular attention to Web 2.0. Web 2.0 is a term used to describe the more dynamic, modern websites and patterns of web use.

This week’s reading list was made up of:

  • Katajisto, L. (2010) “Implementing Social Media in Technical Communication”. 2010 IEEE International Professional Communication Conference. IPCC. 236-242.
  • Palen, L., Vieweg, S., Liu, S.B. & Hughes, A.L. (2009) “Crisis in a Networked World: Features of Computer-Mediated Communication in the April 16, 2007, Virginia Tech Event”. Social Science Computer Review. 27(4) 467-480.
  • Kim, Y., Sohn, D. & Choi, S.M. (2011) “Cultural difference in motivations for using social network sites: A comparative study of American and Korean college students”. Computers in Human Behavior. 27(1) 365-372.

Katajisto’s article focuses on how social media can be used for technical communication and user support. She defines six genres of social media: “1) content creation and publishing, 2) content sharing, 3) social networking, 4) collaborative producing, 5) virtual worlds, and 6) add-ons.” She also defines six types of social media user: “1) creators, 2) critics, 3) collectors, 4) joiners, 5) spectators, and 6) inactives.” She then discusses how Nokia utilises Facebook, Twitter, YouTube, Second Life, discussion boards and blogs, although she points out that only some of these social media elements are used for user support. She describes, however, how companies could use social media for user support; for example, Twitter could be used to inform users of software updates. She also crucially points out that social media may not be appropriate for every situation or user.

The ‘Crisis in a Networked World’ article considers research into public information sharing and communication at the time of, and following, the ‘crisis at Virginia Tech’ on the 16th of April 2007. The article points out that ICT has made it possible to track ‘patterns of communication’. It suggests that social media can be used to find information and for people to help each other in times of disaster, and that through the use of ‘peer-to-peer’ supported ICT, information production is no longer bound by geography.

The ‘Cultural difference in motivations’ article compares the ways in which Korean and American students use social networking sites and their motivations for doing so. The article looks at how differences in culture may alter the use and motivation of a social network user. It finds that while the reasons for using social networks are similar across both cultures, some discrepancies occur in the finer detail. The article suggests that Korean students may place more importance on “obtaining social support” while American students place more emphasis on “seeking entertainment.”

The lab session allowed students to explore Dreamweaver in some detail through a step-by-step guide to building a website. I personally found this useful for learning about the program, as previously all my web work had been done without the utilities of Dreamweaver. I believe I will continue to write my code by hand in the future, but having knowledge of Dreamweaver as a fallback, I feel, is very useful.

Week 11

This week’s main focus was on the usability and accessibility of the web. By the usability of something we mean how effective and efficient it is to use, along with how easy it is to learn to use. By accessibility, on the other hand, we mean how well it can be used by people who may differ from the creator in some way; for example, accessibility could be judged by whether something is usable by the partially sighted, the hard of hearing, wheelchair users or people with colour blindness. Usability and accessibility fall under the subject of HCI (Human Computer Interaction), which explores the interaction between people and computers.

This week’s reading consisted of:

  • Nielsen, J. & Faber, J.M. (1996) “Improving System Usability Through Parallel Design”. Computer. 29(2) 29‐35.
  • Nielsen, J. (1997) “Learning From the Real World”. IEEE Software. 14(4) 98‐99.
  • Fang, X. & Holsapple, C.W. (2011) “Impacts of navigation structure, task complexity, and users’ domain knowledge on Web site usability – an empirical study”. Information Systems Frontiers. 13(4) 453‐469.

Nielsen and Faber’s article points out that in order to keep up with competition, companies must release software hastily. However, usability needs are not always met: in order to reduce development time, prototyping is kept to a minimum, so not all usability issues may be uncovered before release. ‘Parallel Design’ is proposed as a way to enhance usability while allowing for a shorter development time. The principle of Parallel Design is that multiple designers each work on an initial version of a project; these individual initial designs are then merged to create a more robust design. This may be seen as more time-efficient than standard linear project development, although it is arguably less efficient in that many designers have to work separately on the same task.

In Nielsen’s article “Learning From the Real World” he explores how software design can be related to the design of physical objects such as phones and cars. Nielsen points out that usability issues are not confined to software but can be seen in everyday objects; he suggests, for instance, that the number-based system for contacting people by phone is highly inefficient. Nielsen also draws an analogy with driving. He suggests that the act of driving a car is not in itself that difficult, but when other road users come into play it becomes an entirely different matter. He relates this to the web: if it were made up simply of the pages a user needed it would be a fairly efficient and effective system, but the quantity of users and data on the web can make it far less easy to navigate.

Fang and Holsapple’s article points out that a key use of the web is the acquisition of knowledge. However, they argue that not every website’s usability is adequate for users to access the information they require. They suggest that this leads to discontent among users, which implies that a website’s success will be compromised.

In this week’s lab session the students were given the task of critiquing each other’s work in creating logos from a previous coursework task. This gave an insight into how grading is performed and, crucially, therefore how to improve one’s own grade.

Week 10

This week’s focus was on prototyping. The principles and thought processes behind low- to high-fidelity prototyping were explored. Depending on the level of fidelity needed, choices can be made about which form of prototyping to use; these forms could be mock-ups, wireframes or sketches. Crucially, prototyping is a cyclical process of prototype, review and refine.

The weekly readings were Still and Morris’ “The Blank‐Page Technique: Reinvigorating Paper Prototyping in Usability Testing” and Walker, Takayama and Landay’s “High‐Fidelity or Low‐Fidelity, Paper or Computer? Choosing Attributes when Testing Web Prototypes”.

Still and Morris point out that early and frequent prototyping allows for a much better evaluation process; this reduces the risk of issues that need lengthy overhauls and in turn reduces financial risk. They also suggest that paper prototyping is becoming obsolete because it is too time-consuming and does not allow for adequate representations, with medium-fidelity prototypes (wireframes) being used in place of low-fidelity concepts like paper prototypes. However, they suggest that medium-fidelity prototyping may lessen user input. Still and Morris therefore perform an experiment where they combine the user involvement of paper prototyping with medium-fidelity prototyping: they created a wireframe in which clicking a dead link led to a page requesting information about what the user would like to see developed in its place.

Walker, Takayama and Landay conduct an alternative experiment to Still and Morris. They suggest there is little research into which level of prototype fidelity results in the most useful feedback from users, so they gather feedback from users who have tested both low- and high-fidelity prototypes. Through their experiments they find that high- and low-fidelity prototyping result in equally useful feedback, and they conclude that the choice of prototyping form should depend on the needs of the developer.

Computing History – Alan Turing

Alan Turing is considered to be the original ‘founder of computer science.’1 Turing was born in London on the 23rd of June 1912 and between 1931 and 1934 he studied Mathematics at King’s College, Cambridge University. His dissertation, in which he proved the central limit theorem, saw him elected a fellow of King’s College upon graduation. He created the concept of the Turing machine, a hypothetical machine capable of carrying out any computable algorithm. Studying Mathematics and Cryptology in America, Turing received his doctorate from Princeton University in 1938. After receiving his Ph.D Turing returned to England and began work at the Government Code and Cypher School, ‘a British code-breaking organization.’2 During World War II he worked full-time at Bletchley Park, the Code and Cypher School headquarters. Here he played a key role in interpreting German messages that had been encrypted using the Enigma machine, and was responsible for the ‘Bombe’, an electro-mechanical device created to break the Enigma machine’s encryption. This deciphering supplied significant intelligence to the Allies, helping them to win the war. Following the war Turing began work at the National Physical Laboratory, where he notably developed the concept for the Automatic Computing Engine (ACE), an early and revolutionary computer design; his ideas, however, were quashed by his colleagues. In 1949 he took a position at Manchester University and in 1950 he released his paper ‘Computing Machinery and Intelligence’, which laid the foundations of artificial intelligence. The paper outlined an experiment known as the ‘Turing Test’, a method for testing artificial intelligence, which still has influence to this day.

In England during the 1950s homosexuality was illegal, and in January 1952, after admitting to having a relationship with another man, Turing was arrested. He was charged and given a choice of sentences: either imprisonment or chemical castration through a course of oestrogen injections. Turing opted for the hormonal treatment, administered to reduce libido. Despite having effectively shortened the war, saved lives and designed the blueprints for modern computing, Turing had his security clearance revoked, meaning he could no longer work for the Code and Cypher School, by then known as the Government Communications Headquarters (GCHQ). On the 7th of June 1954, aged 41, Turing died of cyanide poisoning. A half-eaten apple was found beside his bed and it is thought that it contained the lethal cyanide; however, the apple was never tested. Turing’s death was ruled a suicide.

Posthumously, Turing has received far more recognition than he did in his lifetime. He was appointed an Officer of the Order of the British Empire (OBE) for his wartime services, and there is a blue English Heritage plaque on his childhood home. Along with these and many other forms of recognition, some consider the Apple Inc. logo to be a tribute to Turing, although this has never been confirmed. Andrew Hodges, mathematician and activist, released his book ‘Alan Turing: The Enigma’ in 1983. The book details the life and achievements of Alan Turing, one of the most crucial figures in the development of computing.

1 Alan Turing: The Enigma. 2013. [ONLINE] Available at: http://www.turing.org.uk/. [Accessed 13 December 2013].

2 Alan Turing Biography – Facts, Birthday, Life Story. 2013. [ONLINE] Available at: http://www.biography.com/people/alan-turing-9512017. [Accessed 13 December 2013].

Web Technologies

Coursework 1

Portfolio Task 1-1

HTML

Hyper Text Markup Language, commonly referred to simply as ‘HTML’, is a markup language, arguably the most popular, of the World Wide Web. An HTML document is made up of ‘tags’ and plain text. These tags describe the document to a web browser so that the browser can display it in the structure defined by the author. For example, the HTML tag ‘<br>’ defines a single line break, while the tag ‘<p>’ defines a paragraph. The plain text, on the other hand, makes up the information displayed, as structured by the tags. HTML was developed in 1990 by Tim Berners-Lee, a scientist working for the organisation CERN. Tim Berners-Lee is a crucial figure in the development of modern technology: along with creating HTML, he is also responsible for the World Wide Web itself. HTML’s specification is maintained by the World Wide Web Consortium (W3C), which, incidentally, was also founded by Berners-Lee.
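
As a minimal sketch of how the tags and plain text fit together (the page title and wording here are my own, purely for illustration), a simple HTML document might look like this:

    <!DOCTYPE html>
    <html>
      <head>
        <title>Example page</title>  <!-- shown in the browser's title bar or tab -->
      </head>
      <body>
        <p>This sentence is plain text wrapped in a paragraph tag.</p>
        <p>This sentence breaks here<br>and carries on after a single line break.</p>
      </body>
    </html>

The browser reads the tags to work out the structure, and only the plain text between them is actually displayed.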

The principal idea of the Web was that one document could be linked to several other documents, which could in turn be linked to others, making a ‘web’ of documents. For example, whilst reading a research paper a user could be linked to other pages containing relevant information ‘using some form of hypertext’(1), which might back up an argument or explore in further detail a topic that was touched on in the initial paper. Building on SGML (Standard Generalized Markup Language), which also used tags in its code, Berners-Lee created HTML to deal with the needs of the Web. In the early 1990s discussions were held, through a mailing list known as ‘WWW-talk’, regarding the creation of a standard specification for HTML. Berners-Lee gave the responsibility for creating this standard to Dave Raggett, a key figure in the development of web technologies. This brought about a draft known as ‘Hypertext Markup Language, Ver 1.0’, which was released in June 1993. However, the draft ‘expired while the noise continued about solidifying HTML’(2). In 1995 the first official specification standard, ‘HTML 2.0’, was released. In the first incarnations of HTML users were limited to 22 tags, but as the popularity of the Web expanded the need for more tags led to a draft of ‘HTML 3.0’. The browser vendors of the time were reluctant to implement the changes as they had already begun introducing their own proprietary tags to address the issue. However, the use of browser-specific tags places serious restrictions on the users and developers of websites, as pages would be viewed differently in different browsers or different code would have to be written specifically for each browser. Thankfully this issue was resolved, to some degree, and in January 1997 ‘HTML 3.2’ was introduced as the specification standard. This was not without its drawbacks, though. HTML 3.2 allowed developers to embed style directly in their code, which meant that, for detailed styling, the files became very cumbersome; if an element needed changing it might have to be changed in multiple places, which could prove awkward to find among the masses of code. In order to make HTML documents more manageable, ‘HTML 4.0’, released in December 1997 and becoming the standard in April 1998, allowed developers to keep their style code externally and link it into their HTML files using a link tag.

CSS (Cascading Style Sheets) is a topic very closely related to HTML, as it is used to apply style to HTML files. CSS can be written inline or in an internal style sheet within an HTML file. Alternatively, the code can be written in an external CSS file and then linked to any HTML file that requires it using the HTML link tag. This is incredibly useful as it allows site-wide changes to be made by altering very small amounts of code.
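
As a hedged sketch of that external approach (the file name styles.css and the rule below are just illustrative), the HTML head links the style sheet and the CSS file holds the rules:

    <!-- in the HTML file: the link tag pulls in the external style sheet -->
    <head>
      <link rel="stylesheet" type="text/css" href="styles.css">
    </head>

    /* in styles.css: this single rule restyles every paragraph on every page that links the file */
    p {
      font-family: Arial, sans-serif;
      color: #333333;
    }

Editing that one rule would then update every page on the site that links to styles.css.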

‘HTML5’ could be seen as the future direction of HTML. A joint effort between the W3C and the Web Hypertext Application Technology Working Group (WHATWG), HTML5 is set to supersede HTML 4 and XHTML. Despite the fact that a great deal of HTML5’s functionality has already been introduced into browsers, it is still considered a ‘work in progress.’(3) HTML5 will, however, allow for more standardisation across platforms. It offers a more seamless and universal experience for users as it deals with problems, such as embedding video into pages, which in the past have had to be handled by external plug-ins.
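
For instance (a minimal, hypothetical snippet; the file name movie.mp4 is only a placeholder), HTML5’s video tag plays video natively, with no plug-in required:

    <video width="640" height="360" controls>
      <source src="movie.mp4" type="video/mp4">
      Your browser does not support the video tag.  <!-- fallback text for older browsers -->
    </video>

Older approaches needed Flash or another plug-in to achieve the same result.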

1. Chapter 2. 2013. [ONLINE] Available at: http://www.w3.org/People/Raggett/book4/ch02.html. [Accessed 01 December 2013].

2. HTML 2.0. 2013. [ONLINE] Available at: http://www.blooberry.com/indexdot/history/html20.htm. [Accessed 01 December 2013].

3. HTML5 Introduction. 2013. [ONLINE] Available at: http://www.w3schools.com/html/html5_intro.asp. [Accessed 26 November 2013].

Word Count 756

Bibliography

Week 9

This week’s main focus has been on ‘Information Architecture’. Information Architecture (IA) is the term used to describe the structure in which information is organised. The object of IA is to group information into categories so that intuitive navigation allows a user to access the information they need in the manner the creator deems most effective. There are many structures common to IA, some of which are: linear, web/grid, hub & spoke and hierarchical. Each structure has its own strengths and weaknesses. It is worth noting that structures can be considered ‘broad and shallow’ or ‘narrow and deep’: a broad and shallow structure has many categories but fewer pieces of information per category, whilst a narrow and deep structure has fewer categories and more information per category. Navigation plays a major role in IA too. There are many methods of navigation, such as search features, site maps and breadcrumbs (like those found on an eBay item, showing the categories and sub-categories the item is listed under).
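
As a rough sketch of the breadcrumb idea (the category names and URLs here are invented purely for illustration), the markup can be as simple as a trail of links:

    <!-- a breadcrumb trail showing where the current page sits in the site's hierarchy -->
    <p class="breadcrumbs">
      <a href="/">Home</a> &gt;
      <a href="/electronics/">Electronics</a> &gt;
      <a href="/electronics/cameras/">Cameras</a> &gt;
      Digital SLR
    </p>

Each link steps one level back up the hierarchy, so the user can always see, and return to, the categories above the page they are currently on.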

This week’s reading consisted of:

  • Green, G.K. (2001) “Information Architecture”. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2001. Vol 45 595‐598.
  • Burford, S. (2011) “Complexity and the Practice of Web Information Architecture”. Journal of the American Society for Information Science and Technology. 62(10) 2024‐2037.
  • Nawaz, A. (2012) “A Conceptual Framework of Information Learning and Flow in Relation to Websites’ Information Architecture”. In: Proceedings of the 4th International Conference on Intercultural Collaboration. ICIC’12. New York: ACM.

Green’s article defines Information Architecture as “both science and art.” The article then goes on to define “Bottom-up architecture” and “Top-down architecture”. One interesting point is that the article describes information architecture as “neither completely understood nor particularly well defined” because of how young it is as a concept. Traditional architecture, by contrast, is ingrained in human existence, and the article suggests that lessons can be learnt and principles applied from that discipline. The article also suggests that the architecture and navigation of a website play a key role, alongside the content, in providing the user experience.

Burford’s article explores the outcomes of research into information architecture in large organisations. The article points out that, because of HTML, websites inherently have some form of structure regardless of whether information architecture has been implemented intentionally. It goes on to explain that one major struggle of information architecture is that many users may use a website for different purposes, and a form of architecture that suits one user’s needs may not suit another’s. Similarly to Green’s article, it declares that despite the wide scope of information architecture in the modern world, there is very little research into the subject. Burford concludes that for the best information architecture to occur in large organisations it should be “characterized by negotiation and compromise”, as there are often many different viewpoints and ideas that need to be taken into account.

I was unable to access: Nawaz, A. (2012) “A Conceptual Framework of Information Learning and Flow in Relation to Websites’ Information Architecture”.

In this week’s seminar session we took a more practical approach to Information Architecture. Through group and individual work we looked at lists of information and categorised them, along with answering questions based around the topic. It was interesting to find that most groups reached similar conclusions with regards to the categorisation of the information. Almost all groups also fell into the trap of including ambiguous categories; this was useful, though, as it is now a mistake I will avoid.

In the seminar we also looked over the next piece of coursework detailing what needs to be done. It is key I catch up on the coursework as the deadlines are looming.

Week 8

This week continued the exploration of CSS. Methods were put into practice to show how CSS can be used to style HTML pages.

This week’s reading was:

  • Niederst Robbins, J. (2012) “Learning Web Design: A Beginner’s Guide to (X)HTML, Style Sheets and Web Graphics”. 4th ed. Sebastopol: O’Reilly Media, Inc. Chapters 14‐16

Chapter 14 of ‘Learning Web Design’ continued on from the box model concept. The different sections of element boxes were discussed, along with modifying their dimensions. Crucial concepts such as padding, borders and margins were also explored.
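
As an illustrative sketch of the box model (the class name and values below are my own), padding, border and margin wrap around an element’s content in that order:

    /* the content area is 200px wide; padding, border and margin all add to the box's overall size */
    .box {
      width: 200px;               /* content area */
      padding: 10px;              /* space between the content and the border */
      border: 2px solid #000000;  /* the visible edge of the box */
      margin: 20px;               /* space outside the border, between this box and its neighbours */
    }

With these values the box takes up 264px of horizontal space in total (200 + 2×10 + 2×2 + 2×20).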

Chapter 15 of ‘Learning Web Design’ moved on to discussing how the layout of a webpage can be developed using floats and positioning. The chapter discussed floating elements to the left and right, as well as how to force an element below a previous element using clear. More exact positioning, both relative and fixed, was also explored.
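
A small hedged example of the float, clear and positioning behaviour described above (the selector names are invented for illustration):

    /* two columns floated to the left and right, with a footer forced below both using clear */
    #sidebar { float: left;  width: 30%; }
    #main    { float: right; width: 65%; }
    #footer  { clear: both; }

    /* fixed positioning pins an element to the browser window; relative nudges it from its normal place */
    #banner  { position: fixed;    top: 0;    left: 0; }
    #note    { position: relative; top: 10px; left: 5px; }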

Chapter 16 of ‘Learning Web Design’ worked on creating template page layouts. The chapter lays out the HTML and CSS that can be used to create the templates: some are more static, using exact positioning, while others can be seen as more ‘liquid’.
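
As a sketch of that difference (the widths here are chosen arbitrarily), a static template fixes its width in pixels while a ‘liquid’ one uses percentages so it stretches and shrinks with the browser window:

    /* static layout: the page stays 960px wide regardless of the window size */
    #wrapper-static { width: 960px; margin: 0 auto; }

    /* liquid layout: the page always fills 90% of the window, so it resizes with it */
    #wrapper-liquid { width: 90%; margin: 0 auto; }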

Within the lab session an HTML template was manipulated by linking an external CSS file. The group was shown a completed page in a browser and then had to go about emulating it through CSS. Despite my lack of experience I felt I was able to follow the task with a little coaching. This was a very useful way for me to understand how CSS (and to some degree HTML) works in practice. I feel it is very important that I improve my knowledge of HTML, as it is the area of study I feel I have missed out on most due to my late arrival. I also need to study Dreamweaver and Fireworks as I have very little experience of either.