Department of Information Technology

    • Information technology (IT) is the application of computers and telecommunications equipment to store, retrieve, transmit and manipulate data, often in the context of a business or other enterprise.

      The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information-distribution technologies such as television and telephones. Several industries are associated with information technology, including computer hardware, software, electronics, semiconductors, the Internet, telecom equipment, e-commerce and computer services.

      Humans have been storing, retrieving, manipulating and communicating information since the Sumerians in Mesopotamia developed writing in about 3000 BC, but the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing information, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.

    • At the heart of MPGI is the relevance and rigor of its research, teaching and learning materials. The experience and talents of our faculty combine to create world-class research results as well as teaching excellence. The result is top-notch educational programmes and cutting-edge research that extends the frontiers of knowledge. MPGI's prolific research output both identifies current trends in today's demanding educational environment and explores principles that guide longer-term success.

      The Faculty – one collaborative environment

      We have a strong emphasis on pooling academic resources and expertise across our Institutions, creating richer undergraduate experiences, new training programs, and a host of new collaborative research opportunities.



      The Internet today is a best-effort network with time-varying bandwidth characteristics. A system for multicasting video over the Internet has to deal with heterogeneity in receivers' capabilities and requirements, so adaptive mechanisms are needed. Real-time multimedia traffic places a number of constraints on the network. Conventionally, multi-layered transmission of data is preferred to solve the problem of varying bandwidth in multimedia multicast applications. In today's scenario, the majority of flows are highly bandwidth-consuming and bursty in nature.

      Consequently, we observe sudden changes in available bandwidth, leading to large variations in received video quality. Our aim is to minimize these variations. In our approach, throughout transmission we maintain a notion of available bandwidth using a "bandwidth prediction model". This model is refined periodically using feedback from the receiver. We also propose to utilize the "startup latency", i.e. the time before playout starts at the client, to overcome bandwidth variations that may arise later.
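      The report does not specify the form of the bandwidth prediction model, so the following is a minimal illustrative sketch only, assuming an exponentially weighted moving average (EWMA) that is refined by periodic receiver feedback; the class and parameter names are hypothetical.

```python
# Hypothetical sketch of a feedback-driven bandwidth predictor (EWMA).
# Not the report's actual model; purely for illustration.

class BandwidthPredictor:
    def __init__(self, initial_kbps, alpha=0.25):
        self.estimate = float(initial_kbps)
        self.alpha = alpha  # weight given to each new receiver report

    def update(self, measured_kbps):
        """Refine the estimate using one periodic receiver feedback report."""
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * measured_kbps
        return self.estimate
```

      Under this sketch, the sender would select the number of video layers that fits under the current estimate, while the startup-latency buffer absorbs short-term prediction errors.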

    • Abstract

      “To generate a summary, i.e. a brief but accurate representation of the contents of a given electronic text document.”

      Researchers and students constantly face this scenario: it is almost impossible to read all, or even most, of the newly published papers to stay informed of the latest progress, and when they work on a research project, the time spent reading the literature seems endless. The goal of this project is to design a domain-independent, automatic text extraction system to alleviate, if not totally solve, this problem.

      Without NLP tools at our disposal, we have scored sentences in the given text both statistically and linguistically to generate a summary comprising the most important ones. The program takes input from a text file and outputs the summary to another text file. The most daunting task at hand was to devise an efficient scoring algorithm that would produce good results for a wide range of text types. The only means to arrive at it was to manually summarize texts and then evaluate sentences for common traits, which were then encoded into the algorithm.


      Our program essentially works on the following logic:

      • a. WORD SCORING

      • 1. Stop Words: These are insignificant words so commonly used in the English language that no text can be created without them. They provide no real idea of the textual theme and have therefore been neglected while scoring sentences. E.g. I, a, an, of, am, the, etc.
      • 2. Cue Words: These are words usually used in the concluding sentences of a text, making sentences that contain them crucial for any summary. Cue words provide closure to a given matter and have therefore been given prime importance while scoring sentences. E.g. thus, hence, summary, conclusion, etc.
      • 3. Basic Dictionary Words: 850 words of the English language have been defined as the most frequently used words that add meaning to a sentence. These words form the backbone of our algorithm and have been vital in the creation of a sensible summary. We have hence given these words moderate importance while scoring sentences.
      • 4. Proper Nouns: Proper nouns in most cases form the central theme of a given text. Although identifying proper nouns without linguistic methods was difficult, we have been successful in identifying them in most cases. Proper nouns lend semantics to the summary and have therefore been given high importance while scoring sentences.
      • 5. Keywords: The user has been given an option to generate a summary that contains a particular word, the keyword. Though this is greatly limited by the absence of NLP, we have tried our best to produce good results.
      • 6. Word Frequency: Once basic scores have been allotted to words, their final score is calculated on the basis of their frequency of occurrence in the document. Words that are repeated more frequently than others carry a more profound impression of the context and have therefore been given higher importance.
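      The word-scoring scheme above can be sketched as follows. This is an illustrative toy, not the project's actual code: the word lists are tiny stand-ins (the real basic dictionary has 850 entries), the score values are arbitrary placeholders, and the proper-noun test is a crude capitalization heuristic.

```python
# Toy sketch of the word-scoring scheme: stop words score zero, cue words
# and proper nouns score high, basic dictionary words score moderately,
# and everything is weighted by document frequency. All values illustrative.
from collections import Counter

STOP_WORDS = {"i", "a", "an", "of", "am", "the", "is", "and", "to"}
CUE_WORDS = {"thus", "hence", "summary", "conclusion", "therefore"}
BASIC_WORDS = {"make", "get", "time", "people", "work"}  # stand-in for the 850-word list

def base_score(word, is_sentence_start):
    w = word.lower()
    if w in STOP_WORDS:
        return 0  # no thematic weight
    if w in CUE_WORDS:
        return 3  # marks concluding material
    if word[:1].isupper() and not is_sentence_start:
        return 3  # crude proper-noun heuristic
    if w in BASIC_WORDS:
        return 2  # moderate weight
    return 1

def word_scores(tokens):
    freq = Counter(t.lower() for t in tokens)
    return {t: base_score(t, i == 0) * freq[t.lower()]
            for i, t in enumerate(tokens)}
```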


      • b. SENTENCE SCORING

      • 1. Primary Score: Using the above methods, a final word score is calculated, and the sum of word scores gives a sentence score. This gives long sentences a clear advantage over their shorter counterparts, which are not necessarily less important.
      • 2. Final Score: By multiplying the score so obtained by the ratio "average length / current length", the above drawback can be largely nullified, and a final sentence score is obtained. The most noteworthy aspect has been the successful merger of frequency-based and definition-based categorization of words into one efficient algorithm, to generate as complete a summary as possible for a given sensible text.
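      The two-stage sentence scoring can be sketched as below. This is an illustrative reading of the description, not the project's code; the word-score function is passed in as a placeholder for the scheme described under Word Scoring.

```python
# Sketch of the two-stage sentence scoring: sum word scores (primary
# score), then normalise by "average length / current length" (final
# score) to remove the bias toward long sentences.

def primary_score(sentence, word_score):
    return sum(word_score(w) for w in sentence.split())

def final_scores(sentences, word_score):
    lengths = [len(s.split()) for s in sentences]
    avg_len = sum(lengths) / len(lengths)
    return [primary_score(s, word_score) * (avg_len / n)
            for s, n in zip(sentences, lengths)]
```

      As a sanity check on the normalisation: if every word scored 1, the primary score would simply equal sentence length, and the final score would come out identical for every sentence, confirming that the length bias is cancelled.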

    • Abstract

      This report develops approaches to evaluating lexico-semantic networks by studying evaluation strategies applied to ontologies. It shows the lack of such methods for networks such as WordNet, and thereby builds a case for such evaluations. A brief introduction to lexico-semantic networks, a mention of the principles of evaluation, and the successes of Machine Translation evaluation are also included in this report.

      Lexico-semantic networks such as WordNet have burst into prominence because applications, especially those targeted at the Web, now aim to enhance the semantic dimensions of their performance. An example of such an application is Information Retrieval, where a lexical resource can help disambiguate query keywords and improve the quality of the search results retrieved. This is especially due to the fact that the quantity of documents now available via the Web is extremely large, resulting in the need for further sophistication. Alternatively, consider automatic generation of content for certain contexts such as tourist phrasebooks, or automatic sensing of emotion from text.

      Lexical resources that can potentially reveal, generate or help infer such content are being developed by various research groups. These are no longer simple dictionaries; rather, they are rich in "semantic content", going far beyond the scope of mere lexicons. Lexico-semantic networks can also be viewed as a reservoir of common-sense concepts arranged ontologically, hence describing the real world through lexical knowledge. The bottom line is that such resources are being increasingly co-opted in applications involving language technology, and not just in English. Almost every major language now has a WordNet project, and efforts such as ConceptNet attempt to include aspects not covered by WordNet.
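      To make "concepts arranged ontologically" concrete, here is a toy hand-built network of is-a (hypernym) links, of the kind WordNet encodes on a vastly larger scale. This is purely illustrative and bears no relation to WordNet's actual data or API.

```python
# A toy hypernym (is-a) network, illustrating how lexico-semantic
# networks arrange concepts ontologically. Hand-built, not WordNet.

HYPERNYMS = {
    "dog": "canine",
    "canine": "mammal",
    "cat": "feline",
    "feline": "mammal",
    "mammal": "animal",
}

def hypernym_chain(word):
    """Walk the is-a links from a word up to the network's root."""
    chain = [word]
    while chain[-1] in HYPERNYMS:
        chain.append(HYPERNYMS[chain[-1]])
    return chain
```

      Even this trivial structure supports inferences a flat dictionary cannot, e.g. that "dog" and "cat" share the ancestor "mammal"; evaluating how well a real network supports such inferences is precisely what this report argues for.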

      The increasing production of such networks and their application in diverse areas call for evaluation methods that describe the quality of rival networks as well as set expectations about their likely performance in applications. This covers a gamut of criteria which, unfortunately, have not been studied in detail. This report sets the stage for an investigation into evaluation strategies for lexico-semantic networks.

© Copyright 2013

Offices Kanpur, Lucknow, New Delhi, Jaipur