
Week 3 Lecture Question Solutions
Professor Yuefeng Li
School of Computer Science, Queensland University of Technology (QUT)
1. Processing Text: From Words to Terms and Text Statistics

The first step in text analysis is to manipulate the text in some way so that information retrieval or text analysis can be conducted. Text processing is a way to find useful index terms or text features from words.
There are several issues to consider when deciding on terms. The first is case sensitivity: people normally do not want a search function that can only do exact matching, and most search engines do not distinguish between uppercase and lowercase letters. The second is that not all words are of equal value in a search or text analysis. Usually, tokenization is used to split text into words. Some words, called stopwords, may be ignored entirely. We may also use stemming so that similar words (like "run" and "running") share the same semantic meaning. Web documents, such as web pages, may also include formatting information (like bold or large text), explicit structure (like XML tags such as <chapter> and <text>), and/or hyperlinks.
Text processing can have a profound effect on the results of text analysis. These techniques may be simple, but it can take data engineers a lot of time to produce high-quality terms or text features. Understanding the statistical nature of text is fundamental to understanding which terms or text features are of high quality.
Statistical models of word occurrences are very important in IR (Information Retrieval). One interesting observation is that the distribution of word frequencies is very skewed: a few words have very high frequencies, while many words have low frequencies.

Question 1. Please open a text or an XML file (e.g., 6146.xml) and represent it as a list of paragraphs or sentences, text. You may remove any non-relevant information (e.g., '<p>', '</p>', '\n'). After that, find all terms with length > 2 and their frequencies (the number of occurrences in the file), represent them using a dictionary, doc, and print the number of total terms (e.g., 137). Then print the top-10 terms in doc in descending order, e.g.,
[('the', 8), ('technical', 2), ('bounce', 2), ('said', 2), ('and', 2), ('not', 2), ('due', 2), ('rose', 2), ("argentina's", 2), ('argentine', 1)]
At last, please plot the distribution of the top-10 terms (see Fig 1 for an example).
Fig 1. Example of distribution of top-k frequent words.
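The following is a minimal sketch of a solution. It assumes the file 6146.xml is in the working directory, that '<p>', '</p>' and '\n' are the only markup to remove, and that matplotlib is available for plotting; the exact tokenization rules (and therefore the counts) may differ from the official solution.

import re
import matplotlib.pyplot as plt

# Read the file and represent it as a list of paragraphs, text,
# removing simple markup and blank lines.
with open('6146.xml', 'r') as f:
    raw = f.read().replace('<p>', '').replace('</p>', '')
text = [line.strip() for line in raw.split('\n') if line.strip()]

# Count every term of length > 2 in a dictionary, doc = {term: frequency}.
doc = {}
for paragraph in text:
    for word in re.split(r'[ \t,.;:!?"()]+', paragraph.lower()):
        if len(word) > 2:
            doc[word] = doc.get(word, 0) + 1

# Taken here as the number of distinct terms; use sum(doc.values())
# instead if total occurrences are wanted.
print('Number of total terms:', len(doc))

# Sort by frequency in descending order and take the top 10.
top10 = sorted(doc.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top10)

# Plot the distribution of the top-10 terms (cf. Fig 1).
terms, freqs = zip(*top10)
plt.bar(terms, freqs)
plt.xticks(rotation=45)
plt.ylabel('Frequency')
plt.show()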
For a very large data collection or corpus, Zipf's law can plausibly predict the relationship between the size of the vocabulary and the size of the corpus.

Document Parsing
Document parsing is the process of representing the content and structure of text documents. The primary content of most documents is the words. Recognizing each word occurrence in a text document is called tokenizing or lexical analysis. Apart from the words, there may be other types of content, such as metadata, images, graphics, code, tables or hyperlinks, where metadata is information about a document that is not part of the text content. Metadata content includes document attributes such as date and author, and, most importantly, the tags that are used by markup languages such as HTML or XML.
The parser uses the tags and other metadata recognized in the document to interpret the document's structure based on the syntax of the markup language (syntactic analysis) and to produce a representation of the document that includes both the structure and content.
Text pre-processing usually includes the following components:
• Tokenizing – the process of forming words from the sequence of characters in a document.
• Stopping – removing words that have little meaning apart from other words.
• Stemming – capturing the relationships between different variations of a word.
• Finding phrases or n-grams – they are useful in IR and text analysis.

Question 2. Which of the following is FALSE? Explain why it is FALSE.
(1) Stemming is a component of text processing that captures the relationships between different variations of a word.
(2) Stemming reduces the different forms of a word that occur because of inflection (e.g., plurals, tenses) or derivation (e.g., making a verb into a noun by adding a suffix) to a common stem.
(3) In general, using a stemmer for search applications with English text produces a small but noticeable improvement in the quality of results.
(4) A dictionary-based stemmer uses a small program to decide whether two words are related, usually based on knowledge of word suffixes for a particular language.
Solution: (4)
A dictionary-based stemmer has no logic of its own; instead, it relies on pre-created dictionaries of related terms to store term relationships. In contrast, an algorithmic stemmer uses a small program to decide whether two words are related, usually based on knowledge of word suffixes for a particular language.

Question 3. (N-grams)
Typically, n-grams are formed from overlapping sequences of words, i.e., by moving an n-word "window" one word at a time through a document. For example, bigrams are 2-word sequences and trigrams are 3-word sequences.
The definition of Tropical fish is described in the following document:
"Tropical fish are generally those found in aquatic environments around the world, including both freshwater and saltwater species. Fishkeepers often keep tropical fish in freshwater and saltwater aquariums."
Please design a python program to print all bigrams and trigrams of the above document that contain at least one of the highlighted key words ('fish', 'tropical', 'freshwater', 'saltwater', 'aquariums').
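A sketch of a possible solution is given below; it assumes simple whitespace tokenization with punctuation stripped, so the exact n-grams printed may differ slightly from the official solution.

document = ("Tropical fish are generally those found in aquatic environments "
            "around the world, including both freshwater and saltwater species. "
            "Fishkeepers often keep tropical fish in freshwater and saltwater aquariums.")
keywords = {'fish', 'tropical', 'freshwater', 'saltwater', 'aquariums'}

# Tokenize: lowercase and strip punctuation so that 'fish.' matches 'fish'.
words = [w.strip('.,') for w in document.lower().split()]

# Move an n-word window one word at a time to form overlapping n-grams.
for n in (2, 3):
    print('---', n, '-grams ---')
    for i in range(len(words) - n + 1):
        ngram = words[i:i + n]
        if any(w in keywords for w in ngram):
            print(' '.join(ngram))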
2. Information Extraction
Information extraction is a language technology that focuses on extracting structure from text, e.g., identifying noun phrases, titles, or even bolded text. Most of the recent research in information extraction has been concerned with features that have specific semantic content, such as named entities, relationships, and events.

Named entity
A named entity is a word or sequence of words (such as people's names, company or organization names, locations, time and date expressions, quantities, and monetary values) that is used to refer to something of interest in a particular application. The process of recognizing named entities and tagging them in text is sometimes called semantic annotation.

Example of tagging information
Fred Smith, who lives at 10 Water Street, Springfield, MA, is a long-time collector of tropical fish.
<PersonName>
<GivenName> Fred </GivenName>
<Sn> Smith </Sn>
</PersonName>, who lives at
<address>
<Street>10 Water Street</Street>,
<City>Springfield</City>,
<State>MA</State>
</address>, is a long-time collector of
<b>tropical fish.</b>

Two main approaches have been used to build named entity recognizers: rule-based and statistical.
A rule-based recognizer uses one or more lexicons (lists of words and phrases) that categorize names, e.g., people's names (given names, family names). In many cases, however, rules or patterns are used to verify an entity name or to find new entities that are not in the lists. For example, new person names could be recognized by rules such as "<title> <name>", where <title> would include words such as "Prof." and "Mr.".
A statistical entity recognizer uses a probabilistic model of the words in and around an entity, e.g., the Hidden Markov Model (HMM) approach.

Hidden Markov Model
In natural language, words can have many different meanings; e.g., "Marathon" could be the name of a race or a location in Greece. Human beings tell the meaning of a word from its context, meaning the words that surround it. For instance, in "Boston Marathon", Marathon is preceded by Boston, so the text is almost certainly describing a race.
The context of a word can be described by modeling the generation of the sequence of words in a text as a process with the Markov property, meaning that the next word in the sequence depends on only a small number of the previous words. A Markov Model describes a process as a collection of states with transitions between them, where each transition has an associated probability.

Question 4. (Markov chain)
Fig 2. Markov chain example.
Assume that when John is sad today, which isn't very usual, he either goes for a run, gobbles down ice cream, or takes a nap the next day.
The Markov chain depicted in the state diagram of Fig 2 has 3 possible states: "Sleep", "Run", and "Ice Cream". The corresponding transition matrix is a 3 × 3 matrix that can be read off the diagram. To represent the matrix simply, we can use the following variables, for example:
SS – Sleep → Sleep
SR – Sleep → Run
SI – Sleep → Ice Cream
(1) According to the above diagram, if John spent sleeping a sad day away, what is the probability that he will go for a run the next day?
(2) The transition matrix for the states is represented as a list of lists in python: [["SS","SR","SI"], ["RS","RR","RI"], ["IS","IR","II"]]. Please write the corresponding transition matrix for the probabilities as a list of lists.

Solutions:
(1) From the state diagram, P(Sleep → Run) = 0.6.
(2) [[0.2,0.6,0.2], [0.1,0.6,0.3], [0.2,0.7,0.1]]

Although Markov models can be used to generate new sentences, they are more commonly used to recognize entities in a sentence. For example, for the following sentence:
Fred Smith, who lives at 10 Water Street, Springfield, MA, is a long-time collector of tropical fish.
Based on training data, the recognizer may find the sequence of states
<start><person><not-an-entity><location><not-an-entity><end>
to have the highest probability for that model. The words that were associated with the entity categories in this sequence would then be tagged.
The key aspect of the above approach to entity recognition is that the probabilities in the sentence model must be estimated from training data. We can generate training data that consists of text manually annotated with the correct entity tags. From this training data, we can work out the probability of words associated with a given category (i.e., output probabilities), and the probability of transitions between categories.
Please note that we can only observe the sentence; the underlying states (entity categories, e.g., "location" or "person") are hidden. Fig 3 shows an example of a Hidden Markov Model. It assumes a patient has two (hidden) states: "Healthy" and "Fever". A doctor cannot see the states directly, but the patient can tell the doctor that she/he is "normal", "cold", or "dizzy". The observations (e.g., normal, cold, dizzy) along with the hidden states (e.g., Healthy, Fever) form a hidden Markov model.
Fig 3. An example of a Hidden Markov Model.
More formally, we have the following inputs and the output for finding the most likely sequence of states in an HMM; Python definitions of the inputs for the Fig 3 example are given in the Appendix.

3. Document Structure and Markup
For web search, queries usually do not refer to document structure or fields, but that does not mean that document structure is unimportant. Some parts of the structure of web pages, indicated by HTML markup, are useful, e.g., the main heading for the page and the anchor text for links. The document parser must recognize this structure and make it available for indexing.
Example: HTML source for a Wikipedia page
https://en.wikipedia.org/wiki/Tropical_fish
<meta name="keywords" content="Tropical fish, Airstone, Albinism, Algae eater, Aquarium, Aquarium fish feeder, Aquarium furniture, Aquascaping, Bath treatment (fishkeeping), Berlin Method, Biotope" />
<title>Tropical fish – Wikipedia, the free encyclopedia</title>

<h1>Tropical fish</h1>

<p><b>Tropical fish</b> include fish found in tropical environments around the world, including both freshwater and salt water species. Fishkeepers often use the term tropical fish to refer only those requiring fresh water, with saltwater tropical fish referred to as marine fish.</p>

<p>Tropical fish are popular aquarium fish, due to their often bright coloration. In freshwater fish, this coloration typically derives from iridescence, while salt water fish are generally pigmented.</p>


The above HTML source shows more structure, where each field or element in HTML is indicated by a start tag (such as <h1>) and an optional end tag (e.g., </h1>).
Elements can have attributes (with values), given by attribute_name = "value" pairs. The <head> element contains metadata that is not displayed by a browser. The metadata element for keywords (<meta name="keywords" ...>) gives a list of keywords for the page, and the <title> metadata element gives the title for the page (which is different from the main heading). The <body> element of the document contains the content that is displayed.
The main heading is indicated by the <h1> tag. Other headings, of different sizes and potentially different importance, would be indicated by <h2> through <h6> tags. Terms that should be displayed in bold or italic are indicated by <b> and <i> tags.
Links, such as <a href="/wiki/Fish">fish</a>, are very common. They are the basis of link analysis algorithms such as PageRank, but they also define the anchor text. Links and anchor text are of particular importance to web search.
Question 5.
Design a python program to extract all hyperlinks (or destination links) in an HTML file. You can use the HTMLParser python package.
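A minimal sketch using the HTMLParser class from Python's standard html.parser module is given below; 'page.html' is a placeholder name for whatever HTML file you want to process.

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the destination of every <a href="..."> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

parser = LinkExtractor()
with open('page.html', 'r', encoding='utf-8') as f:
    parser.feed(f.read())
for link in parser.links:
    print(link)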

Appendix: Hidden Markov Model (Optional)
Question 4. (Markov chain) continued.
(3) (Optional) Design a function to forecast the state after k days. For example, you may assume the start state is 'Sleep'. You can also use numpy.random.choice to generate a random sample from the set of transitions for each day. You may get the following sample output:
Start state: Sleep
Possible states: ['Sleep', 'Run', 'Run']
End state after 2 days: Run
Probability of the possible sequence of states: 0.3

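The following is a sketch of such a forecast function, using the transition matrix from part (2); since each day's transition is sampled at random, the path and probability printed will vary from run to run.

import numpy as np

states = ['Sleep', 'Run', 'Ice Cream']
transition_name = [['SS', 'SR', 'SI'], ['RS', 'RR', 'RI'], ['IS', 'IR', 'II']]
transition_matrix = [[0.2, 0.6, 0.2], [0.1, 0.6, 0.3], [0.2, 0.7, 0.1]]

def activity_forecast(days, start_state='Sleep'):
    # Walk the Markov chain for `days` steps, sampling each transition
    # from the current state's row of the transition matrix.
    state = start_state
    path = [state]
    prob = 1.0
    for _ in range(days):
        i = states.index(state)
        change = np.random.choice(transition_name[i], p=transition_matrix[i])
        j = transition_name[i].index(change)
        prob *= transition_matrix[i][j]
        state = states[j]
        path.append(state)
    print('Start state:', start_state)
    print('Possible states:', path)
    print('End state after', days, 'days:', state)
    print('Probability of the possible sequence of states:', prob)

activity_forecast(2)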
The Viterbi algorithm, a dynamic programming algorithm, can be used to find the most likely sequence of states in an HMM. Its probability calculations can be described as follows.

The calculation of probabilities in VITERBI can be understood via the chain rule. For two random variables x and y, to find the joint distribution we can apply the definition of conditional probability to obtain:
P(x, y) = P(y|x)*P(x)
Let Pr(i, y1) = p_i * B_{i,y1} be the probability that y1 is observed from state i, where p_i is the initial probability of state i. It is obvious that T1[i,1] = Pr(i, y1).
Let Pr(k, i, yj) be the probability that yj is observed from state i after we observed y_{j-1}, where state i is reached by a transition from state k, i.e., Pr(k, i, yj) = T1[k, j-1] * A_{k,i} * B_{i,yj}.
That is, T1[i, j] = max_k(Pr(k, i, yj)). For example, T1[i, 2] = max_{k ∈ [1..K]}(Pr(k, i, y2)). Fig 4 shows the different transition paths (for k ∈ [1..K]) to state i for the observation y2.

Fig 4. The different transition paths (A_{k,i}, for k ∈ [1..K]) to state i.
For the example in Fig 3, we can also represent the inputs as the following python definitions:

obs = ("normal", "cold", "dizzy")   # assume Y = O here
states = ("Healthy", "Fever")
start_pi = {"Healthy": 0.6, "Fever": 0.4}
trans_A = {
    "Healthy": {"Healthy": 0.7, "Fever": 0.3},
    "Fever": {"Healthy": 0.4, "Fever": 0.6},
}
emit_B = {
    "Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
    "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6},
}
The VITERBI algorithm first initializes T1 and T2, where K=2, y1=normal, y2=cold, y3=dizzy; for the first observation y1, we have
T1[1,1] = 0.6*0.5 = 0.3
T1[2,1] = 0.4*0.1 = 0.04
T2[1,1] = 0
T2[2,1] = 0
For each remaining observation:
For y2, j=2, i=1, we have
T1[1,2] = max_{k=1,2}(0.3*0.7*0.4, 0.04*0.4*0.4) = 0.084
T2[1,2] = 1 (k=1)

For y2, j=2, i=2, we have
T1[2,2] = max_{k=1,2}(0.3*0.3*0.3, 0.04*0.6*0.3) = 0.027
T2[2,2] = 1 (k=1)
For y3, j=3, i=1, we have
T1[1,3] = max_{k=1,2}(0.084*0.7*0.1, 0.027*0.4*0.1) = 0.00588
T2[1,3] = 1 (k=1)
For y3, j=3, i=2, we have
T1[2,3] = max_{k=1,2}(0.084*0.3*0.6, 0.027*0.6*0.6) = 0.01512
T2[2,3] = 1 (k=1)
Backtracking from the largest final probability (T1[2,3] = 0.01512) via the T2 pointers gives
x_3 = Fever
x_2 = Healthy
x_1 = Healthy
Question 6. (This question is optional)
For a given Hidden Markov Model, design a python Viterbi function. You can test your function using Fig 3 and the above definitions. For example, the following is the sample output of the call
viterbi(obs, states, start_pi, trans_A, emit_B)

               1        2        3
Healthy: 0.30000  0.08400  0.00588
Fever:   0.04000  0.02700  0.01512
The steps of states are Healthy Healthy Fever with highest probability of 0.01512
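Below is a sketch of such a Viterbi function; it reproduces the T1 table and backtracking from the worked example above, though the output formatting only approximates the sample output.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[j][state] holds the highest probability (T1) of any state sequence
    # that ends in `state` at step j, plus the best predecessor (T2).
    V = [{}]
    for st in states:
        V[0][st] = {'prob': start_p[st] * emit_p[st][obs[0]], 'prev': None}
    for j in range(1, len(obs)):
        V.append({})
        for st in states:
            # Maximize over all predecessor states k (the T1 recurrence).
            best_prev = max(states, key=lambda k: V[j-1][k]['prob'] * trans_p[k][st])
            best_prob = V[j-1][best_prev]['prob'] * trans_p[best_prev][st] * emit_p[st][obs[j]]
            V[j][st] = {'prob': best_prob, 'prev': best_prev}

    # Print the T1 table, one row per state.
    print(''.join('%9d' % (j + 1) for j in range(len(obs))))
    for st in states:
        print(st + ': ' + ' '.join('%.5f' % V[j][st]['prob'] for j in range(len(obs))))

    # Backtrack from the most probable final state.
    last = max(states, key=lambda st: V[-1][st]['prob'])
    path, prev = [last], V[-1][last]['prev']
    for j in range(len(obs) - 2, -1, -1):
        path.insert(0, prev)
        prev = V[j][prev]['prev']
    print('The steps of states are', ' '.join(path),
          'with highest probability of', '%.5f' % V[-1][last]['prob'])

viterbi(obs, states, start_pi, trans_A, emit_B)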
