Monday 25 February 2013

EMERGING TECHNOLOGIES



WEEK 5








NANOTECHNOLOGY






Nanotechnology is the manipulation of matter on an atomic and molecular scale. Generally, nanotechnology works with materials, devices, and other structures with at least one dimension sized from 1 to 100 nanometres. It is very diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, and from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale. Nanotechnology entails the application of fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, energy production and many others.



Some things that will become practical with mature nanotechnology in the future are:

  • Tennis balls that last longer, golf balls that fly straighter, and bowling balls that are more durable and have a harder surface. 
  • Trousers and socks infused with nanotechnology so that they last longer and keep people cool in the summer. 
  • Bandages infused with silver nanoparticles to heal cuts faster.
  • Cars manufactured with nanomaterials, so that they may need fewer metals and less fuel to operate.
  • Video game consoles and personal computers that are cheaper, faster, and contain more memory.
  • Existing medical applications made cheaper and easier to use in places like the general practitioner's office and at home.
  • Nearly free consumer products.
  • Safe and affordable space travel.
  • Reintroduction of many extinct plants and animals.
  • An end to new pollution and automatic cleanup of existing pollution.
  • A virtual end to illness, aging and death.

On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials and their potential effects on global economics, as well as speculation about various doomsday scenarios. There is therefore a need for governments and companies to protect the environment, prevent pollution, and avoid effects such as global warming.





GRID COMPUTING



Grid computing (or the use of a computational grid) is applying the resources of many computers in a network to a single problem at the same time. Usually it is applied to a scientific or technical problem that requires a great number of computer processing cycles or access to large amounts of data. A well-known example of grid computing in the public domain is the ongoing SETI@home (Search for Extraterrestrial Intelligence) project, in which thousands of people share the unused processor cycles of their PCs in the vast search for signs of "rational" signals from outer space. According to John Patrick, IBM's vice-president for Internet strategies, "the next big thing will be grid computing."


Grid computing requires the use of software that can divide and farm out pieces of a program to as many as several thousand computers. Grid computing can be thought of as distributed and large-scale cluster computing and as a form of network-distributed parallel processing. It can be confined to the network of computer workstations within a corporation.
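
To make the idea concrete, here is a minimal sketch in Python of the divide-and-farm-out pattern. This is only an illustration, not real grid middleware: local worker processes stand in for grid nodes, and the problem (summing squares) and the chunk count are invented for the example.

# Sketch: split one large job into work units and farm them out to
# "nodes" (here, local worker processes via multiprocessing.Pool).
from multiprocessing import Pool

def work_unit(chunk):
    # One piece of the larger problem: sum the squares of a slice of data.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_nodes = 8  # pretend each worker process is a separate grid node
    size = len(data) // n_nodes
    chunks = [data[i * size:(i + 1) * size] for i in range(n_nodes)]

    with Pool(n_nodes) as pool:
        partials = pool.map(work_unit, chunks)  # farm out the pieces

    print(sum(partials))  # combine the partial answers into one result

A real grid scheduler such as Grid Engine does the same kind of thing at a much larger scale, adding queueing, authentication and fault tolerance across many machines.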


A number of corporations, professional groups, university consortia, and other groups are developing frameworks and software for managing grid computing projects. The European Union (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Described as a distributed resource management (DRM) tool, Grid Engine allows engineers at companies like Sony and Synopsys to pool the computer cycles on up to 80 workstations at a time. (At this scale, grid computing can be seen as a more extreme case of load balancing.)


Grid computing appears to be a promising trend for two main reasons: 

It makes more cost-effective use of a given amount of computer resources, and offers a way to solve problems that cannot be approached without an enormous amount of computing power. 
It also shows that the resources of many computers can be cooperatively, and perhaps synergistically, harnessed and managed as a collaboration toward a common objective. In some grid computing systems, the computers may collaborate rather than being directed by one managing computer.



Applications of Grid Computing 

Likely areas for the use of grid computing will be pervasive computing applications, that is, those in which computers pervade our environment without our being necessarily aware of them. Some application areas are:

  • Government
  • Health maintenance organisations
  • Computational market economies
  • Electric power grids
  • Research applications
  • Academic organisations






QUANTUM COMPUTING

A quantum computer is a computational device that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. First proposed in the 1970s, quantum computing relies on properties of atoms or nuclei that allow them to work together as quantum bits, or qubits, which serve as the computer's processor and memory. By interacting with each other while isolated from the external environment, qubits can perform certain calculations exponentially faster than conventional computers.

Qubits do not rely on the traditional binary nature of computing. Traditional computers encode information into bits using binary digits, either a 0 or a 1, and can only do calculations on one set of numbers at once. Quantum computers instead encode information as quantum-mechanical states, such as the spin direction of an electron or the polarization orientation of a photon. Such a state might represent a 1 or a 0, a combination of the two, or a superposition of many different numbers at once. A quantum computer can do an arbitrary reversible classical computation on all of these numbers simultaneously, which a binary system cannot do, and can also produce interference between the various numbers. By doing a computation on many different numbers at once and then interfering the results to get a single answer, a quantum computer has the potential to be much more powerful than a classical computer of the same size. Using only a single processing unit, a quantum computer can naturally perform myriad operations in parallel.
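
A rough way to see what superposition means is to simulate a single qubit classically as a two-component state vector. The Python sketch below (using numpy) applies a Hadamard gate to the state |0> and prints the resulting 50/50 measurement probabilities. It illustrates only the arithmetic; it is not how a real quantum computer is programmed.

# Sketch: one qubit as a 2-component complex state vector.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>, the analogue of classical bit 0

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0             # |psi> = (|0> + |1>) / sqrt(2)
probs = np.abs(state) ** 2   # measurement probabilities are squared amplitudes

print(state)  # [0.7071+0j, 0.7071+0j]
print(probs)  # [0.5, 0.5] - equal chance of reading 0 or 1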

Quantum computing is not well suited for tasks such as word processing and email, but it is ideal for tasks such as cryptography and modeling and indexing very large databases.







SEMANTIC WEB


The Semantic Web is an extension of the World Wide Web that enables people to share content beyond the boundaries of applications and websites. It has been described in rather different ways: as a utopian vision, as a web of data, or merely as a natural paradigm shift in our daily use of the Web. Most of all, the Semantic Web has inspired and engaged many people to create innovative semantic technologies and applications. semanticweb.org is the common platform for this community.
The Semantic Web aims to convert the current Web, dominated by unstructured and semi-structured documents, into a "web of data." It provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries. 

The main purpose of the Semantic Web is to drive the evolution of the current Web by enabling users to find, share, and combine information more easily. Humans are capable of using the Web to carry out tasks such as finding the Estonian translation for "twelve months", reserving a library book, or searching for the lowest price for a DVD. However, machines cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The Semantic Web is a vision of information that can be readily interpreted by machines, so that machines can perform more of the tedious work involved in finding, combining, and acting upon information on the Web. The Semantic Web is regarded as an integrator across different content, information applications and systems. It has applications in publishing, blogging, and many other areas.
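
As a small illustration of the "web of data" idea, the Python sketch below uses the rdflib library to state a few facts as machine-readable subject-predicate-object triples and print them in Turtle syntax. The URIs and property names are made up for the example; they are not a real vocabulary.

# Sketch: expressing data as RDF triples a machine can query and reason over.
from rdflib import Graph, URIRef, Literal, Namespace, RDF

g = Graph()
ex = Namespace("http://example.org/terms/")     # hypothetical vocabulary
dvd = URIRef("http://example.org/shop/dvd/42")  # hypothetical resource

g.add((dvd, RDF.type, ex.DVD))
g.add((dvd, ex.title, Literal("Some Film")))
g.add((dvd, ex.priceInEuros, Literal(9.99)))

print(g.serialize(format="turtle"))  # the same facts, in machine-readable form

With facts published this way, a price-comparison agent could read the data directly instead of scraping pages written for human eyes.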






















However, the Semantic Web has some limitations, such as:

  1. Vastness: The World Wide Web contains many billions of pages, so any automated reasoning system will have to deal with truly huge inputs.
  2. Vagueness: Imprecise concepts like "young" or "tall". This arises from the vagueness of user queries and of the concepts represented by content providers.
  3. Uncertainty: Precise concepts with uncertain values. For example, a patient might present a set of symptoms which correspond to a number of distinct diagnoses, each with a different probability.
  4. Inconsistency: Logical contradictions which will inevitably arise during the development of large ontologies.
  5. Deceit: When the producer of the information is intentionally misleading the consumer of the information.




References:

http://crnano.org/whatis.htm

http://searchdatacenter.techtarget.com/definition/grid-computing

http://www.webopedia.com/TERM/Q/quantum_computing.html
http://semanticweb.org/wiki/Main_Page













Wednesday 6 February 2013

HANDLING COMPLEXITY BY CEOs

WEEK 4








This week looked into the issue of "complexity in information technology." Complexity refers to an emergent property of systems made up of large numbers of self-organizing agents that interact in a dynamic and non-linear way and share a path-dependent history. We learnt that it is divided into three main aspects: 


Technical
Organizational
Societal


How an organisation manages increasing complexity and rates of change, and how it meets the demands of its customers quickly and efficiently, has become a nightmare for many CEOs. An organisation therefore needs an enterprise architecture, a management tool that addresses these challenges, rather than depending on trial and error, which is risky, costly and potentially catastrophic. As is said, "failing to plan is planning to fail." For CEOs and their organizations, avoiding complexity is not an option; the choice comes in how they respond to it. Will they allow complexity to become a stifling force that slows responsiveness, overwhelms employees and customers, or threatens profits? The 2010 IBM Global CEO Study discussed this issue and looked into it in depth. It found that the most successful organizations are using entirely new approaches to tap new opportunities and overcome the challenges to growth. Several findings arose:




1. The vast majority of CEOs anticipate even greater complexity in the future, and more than half doubt their ability to manage it. But those with a good enterprise architecture tend to be successful and have turned increasing complexity into financial advantage. This is made possible by thorough planning and the integration of various models to cope with the increasing rate of change in the industry.

2. CEOs believe creativity is the most important leadership quality. Creative leaders are innovative in how they lead and communicate, allowing new experiments in their organisations that facilitate future inventions. This makes it easier to keep up with customers and new trends in the market. Organizations that capitalize on complexity embody creative leadership, reinvent customer relationships, and build operating dexterity.

3. The most successful organizations co-create products and services with customers, and integrate customers into core processes. Customer feedback is very important in helping to improve quality, generating new ideas, assisting customers, retaining old customers and attracting new ones. Most organisations have a customer care desk which receives customer queries as well as promoting products. Engagement and co-creation with customers produce differentiation. These organizations consider the information explosion immensely valuable in developing deep customer insights.

4. Better performers manage complexity on behalf of their organizations, customers and partners. They simplify operations and products, change the manner in which resources are handled and accessed, and pursue new markets around the world. Increased performance makes their products more competitive and therefore attracts more customers.

5. Lastly, CEOs need to demonstrate good leadership qualities. An autocratic leader is always opposed and has no good relationship with his or her workmates. In an uncertain and volatile world, CEOs realize that creativity trumps other leadership characteristics. Creative leaders are comfortable with ambiguity and experiment to create new business models. They invite disruptive innovation, encourage others to drop outdated approaches and take balanced risks. They are open-minded and inventive in expanding their management and communications styles, in order to engage with a new generation of employees, partners and customers. This develops a harmonious working environment that yields the success of the organisation now and in the future.


Responding to and handling complexity is a vital issue in an organisation. It can determine the way forward for the business, as well as how it copes in a tough market. CEOs need to be flexible and very aware of ongoing trends and changes in the market. They should be able to batch updates and applications, and from there remediate complexity by focusing on the people, processes, and technology associated with the underlying activities. They need to have full access to all the information. CEOs need to be equipped with new knowledge and skills that will enable them to face current issues without using outdated methods.









REFERENCES:

IBM Global Business Services USA (May 2010). Capitalizing on Complexity. Retrieved 6 February 2013 from http://ibm.com/capitalizingoncomplexity


























Monday 4 February 2013

IT SYSTEMS MODEL

WEEK 3






This week we learnt about the IT systems model. IT system modelling is a technique to express, visualize, analyse and transform the architecture of a system. We looked into various models and discussed some.


User Centered Design (UCD)

User-centered design (UCD) is a type of user interface design and a process in which the needs, wants, and limitations of the end users of a product are given extensive attention at each stage of the design process.

In my opinion, Facebook has the best user-centered design.

This is because of the following principles:


It is designed for the users and their tasks, and is therefore widely used.
Facebook is also consistent.
It uses simple and natural dialogue, with simple language and tabs that are easy to operate, so anyone can use it. Facebook is now available in different languages, so each user can choose the language that suits him or her best. Facebook also provides adequate feedback through the "Help" button, so any query is responded to effectively.
It also provides adequate navigation mechanisms for moving between various links, pages, games and other features. It presents information clearly in a display window where one can see all the details, which reduces errors by users.








USABILITY

Usability is a measure of the interactive user experience associated with a user interface, such as a website or software application. A user-friendly interface design is easy to learn, supports users' tasks and goals efficiently and effectively, and is satisfying and engaging to use. According to Jeffrey Rubin, the usability objectives are:



Usefulness
The product enables users to achieve their goals: the tasks that it was designed to carry out and/or the wants and needs of the user.

Effectiveness (ease of use)
Quantitatively measured by speed of performance or error rate, and tied to a percentage of users.

Learnability
The user's ability to operate the system to some defined level of competence after a predetermined period of training. Also refers to the ability of infrequent users to relearn the system.

Attitude (likeability)
The user's perceptions, feelings and opinions of the product, usually captured through both written and oral communication.

An interface's level of usability can be measured by inviting intended users of the system to participate in a usability testing session. During a session, a user is given a series of tasks to complete using the system in question, without any assistance from the researcher. The researcher records the user's behaviour, emotional reactions, and performance as he or she attempts to accomplish each task. The researcher notes any moments of confusion or frustration the user experiences while trying to complete a task, and also tracks whether or not the user was able to satisfactorily complete each task. Analysis of data from several users gives user experience engineers a means of recommending how and where to redesign the interface in order to improve its level of usability and thus the user experience in general.
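
As a simple sketch of how such test data might be summarised, the Python below computes a completion rate, average time on task and average error count from a handful of invented usability-test sessions.

# Sketch: summarising usability-test sessions into the metrics mentioned above.
sessions = [
    {"user": "P1", "completed": True,  "seconds": 42, "errors": 1},
    {"user": "P2", "completed": True,  "seconds": 55, "errors": 0},
    {"user": "P3", "completed": False, "seconds": 90, "errors": 4},
    {"user": "P4", "completed": True,  "seconds": 38, "errors": 2},
]

n = len(sessions)
completion_rate = sum(s["completed"] for s in sessions) / n
avg_time = sum(s["seconds"] for s in sessions) / n
avg_errors = sum(s["errors"] for s in sessions) / n

print(f"Completion rate: {completion_rate:.0%}")     # 75%
print(f"Average time on task: {avg_time:.0f} s")     # 56 s
print(f"Average errors per user: {avg_errors:.1f}")  # 1.8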




Why is Usability Important?

From the user’s perspective, usability is important because it can make the difference between performing a task accurately and completely or not, and enjoying the process or being frustrated.
From the developer’s perspective, usability is important because it can mean the difference between the success or failure of a system.
From a management point of view, software with poor usability can reduce the productivity of the workforce to a level of performance worse than without the system. In all cases, lack of usability can cost time and effort and can greatly determine the success or failure of a system. Given a choice, people tend to buy systems that are more user-friendly.







References:

Foraker Labs (2002-2013). Usability First. Retrieved 5 February 2013 from http://www.usabilityfirst.com










TOPICS IN COMPUTING DISCIPLINES

WEEK 2

This week covered the computing disciplines, which are: computer science, computer engineering, software engineering, information systems, and cognitive science.



Computer Science 



CS is the scientific and practical approach to computation and its applications. Related topics are:

  • Algorithms and data structures
  • Artificial intelligence
  • Communications and security
  • Computer architecture
  • Computer graphics
  • Concurrent, parallel, and distributed systems
  • Databases
  • Programming languages and compilers
  • Scientific computing
  • Software engineering
  • Theory of computation



Computer Engineering 



Computer hardware engineers research, design, develop, and test computer hardware, and supervise its manufacture and installation. 
Hardware refers to computer chips, circuit boards, computer systems, and related equipment such as keyboards, modems, and printers.


Software Engineering 





Focuses on large-scale software systems and employs certain ideas from the world of engineering in building reliable software systems. Software engineers develop various kinds of software, such as operating systems, network distribution software, and compilers, which convert programs for execution on a computer. Software engineering is sometimes described as the elite version of IT, where an understanding of the underlying hardware, electronics, and physics is required to ensure that the resulting product will not only meet functional requirements, but also meet timing, safety, reliability, security and fault-tolerance requirements.


Information Systems





Mainly involves computing applications dealing with communication (sender -> lines -> receiver) and setting up networks. It is applied mainly in systems analysis, project management, database administration, network management, and other management fields. It has the responsibility to track new information technology and assist in incorporating it into the organization's strategy, planning, and practices.



Cognitive Science 





Cognitive science is the interdisciplinary scientific study of the mind and its processes. It examines what cognition is, what it does and how it works: how the mind processes information, including research into intelligence and behaviour. It is related to many other fields, with careers in neuroscience, biotech, pharmaceuticals, and others. Related topics are:
  • Psychology
  • Linguistics
  • Philosophy
  • Neuroscience
  • Computer Science



Differences among these computing disciplines


CS topics were fairly diversified, with an emphasis on Computer, Problem domain, and Systems/software concepts. The major CS subcategories were inter-computer communication and hardware principles/architecture, while the Problem domain category was almost entirely about computer graphics/pattern analysis, programming languages, and methods/techniques.

SE focused primarily on Systems/software and Systems/software management concepts. SE subcategories were methods/techniques and tools, while Systems/software management was largely about measurement.


IS focused heavily on Organizational concepts, along with Systems/software management and Systems/software concepts. IS subcategories within organizational concepts were usage/operation and technology transfer. IS also focused on the information systems problem domain (for example, decision support or group support systems) within the category of Problem domain-specific concepts.

These disciplines tend to overlap in topic and are related to each other, in that a professional in one of the fields can specialize in another. This gives a wide range of career paths and choices in many industrial and organisational set-ups, and this is the advantage of studying in the ICT field.




References:


1. Geist, R., Chetuparambil, M., Hedetniemi, M., and Turner, A.J. Computing research programs in the U.S. Commun. ACM 36, 12 (Dec. 1996).
2. Glass, R.L. A comparative analysis of the topic areas of computer science, software engineering, and information systems. Journal of Systems and Software (Nov. 1992).
3. Glass, R.L. and Chen, T.Y. An assessment of systems and software engineering scholars and institutions. Journal of Systems and Software 59, 1 (Oct. 2001). (Published annually since 1994.)
4. Impagliazzo, J. and Gorgone, J.T. Professional






“Most CS people laugh at MIS/IT people,”
and “MIS/IT people make more money and manage the CS folks.”


Well, I agree with this statement. First, if we observe organizations, most of them have an IT department, not a CS department. This is because IT is more involved in the application of day-to-day activities in different organisations. Most organizations need IT people to use their programs on a daily basis, unlike CS people, who would develop software when the need arises. Likewise, IT people are also able to create programs, since their studies involve a broader view covering both IT and CS, although not in as much depth as CS people. It can't be denied that CS is the mother of the computing disciplines; every career needs its products, including software engineering, information systems and so on.


Furthermore, nowadays people use IT to do business, and IT helps them earn more money. IT people can help a company increase its effectiveness and earn more. Many IT people also hold the highest positions in companies, such as CEO or manager, and therefore they can manage the CS folks.


The most important thing is that CS people and IT people are related to each other. CS people create the programs and software, while IT people are concerned with applying such software, as well as training people to use it in the field.