Shawn Zhong

钟万祥
  • Tutorials
  • Mathematics
    • Math 240
    • Math 375
    • Math 431
    • Math 514
    • Math 521
    • Math 541
    • Math 632
    • Abstract Algebra
    • Linear Algebra
    • Category Theory
  • Computer Sciences
    • CS/ECE 252
    • CS/ECE 352
    • Learn Haskell
  • AP Notes
    • AP Microecon
    • AP Macroecon
    • AP Statistics
    • AP Chemistry
    • AP Physics E&M
    • AP Physics Mech
    • CLEP Psycho

Mathematics


Math 521 – 2/28

  • Mar 01, 2018
  • Shawn
  • Math 521
Convergence and Divergence

• Definition
  ○ A sequence {p_n} in a metric space X converges to a point p∈X if,
  ○ given any ε>0, ∃N∈N s.t. d(p, p_n)<ε, ∀n≥N
  ○ If {p_n} converges to p, we write
    § p_n→p
    § lim_(n→∞) p_n = p
    § lim p_n = p
  ○ If {p_n} does not converge, it is said to diverge
• Intuition
  ○ ε is small
  ○ N is a "point of no return" beyond which the sequence stays within ε of p

Range and Bounded

• Range
  ○ Given a sequence {p_n},
  ○ the set of points p_n (n∈N) is called the range of the sequence
  ○ The range may be infinite, but it is always at most countable,
  ○ since we can always construct a surjection f:N→{p_n} with f(n)=p_n
• Bounded
  ○ A sequence {p_n} is said to be bounded if its range is bounded

Examples

• Consider the following sequences of complex numbers {s_n}

  Sequence             Limit      Range      Bounded
  s_n = 1/n            0          Infinite   Yes
  s_n = n^2            Divergent  Infinite   No
  s_n = 1+(−1)^n/n     1          Infinite   Yes
  s_n = i^n            Divergent  {±1, ±i}   Yes
  s_n = 1              1          {1}        Yes

• Proof: lim_(n→∞) 1/n = 0
  ○ Let ε>0
  ○ By the Archimedean Property, we can choose N∈N s.t. N>1/ε
  ○ ∀n≥N, n>1/ε ⇒ 1/n<ε
  ○ i.e. d(1/n, 0) = |1/n| < ε, ∀n≥N
  ○ Therefore lim_(n→∞) 1/n = 0
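The ε–N argument above is easy to sketch numerically. The helper below (a hypothetical `witness_N`, not part of the notes) picks N by the Archimedean Property and spot-checks the tail of s_n = 1/n:

```python
import math

def witness_N(eps):
    """Choose N ∈ N with N > 1/eps (Archimedean Property), so that
    d(1/n, 0) = 1/n < eps for every n ≥ N."""
    return math.floor(1 / eps) + 1

eps = 1e-3
N = witness_N(eps)
assert N > 1 / eps
# Spot-check the tail of the sequence s_n = 1/n:
assert all(abs(1 / n - 0) < eps for n in range(N, N + 10_000))
```

Only finitely many n can be checked by machine, so this is an illustration of the definition, not a proof.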

Math 541 – 2/28

  • Mar 01, 2018
  • Shawn
  • Math 541
Coset

• If G is a group, H≤G, and g∈G
• Define gH ≔ {gh │ h∈H}
• Similarly, define Hg ≔ {hg │ h∈H}
• gH is called a left coset, and Hg is called a right coset
• An element of a coset is called a representative of the coset

Proposition 22: Properties of Cosets

• Let G be a group and H≤G, then
• For g_1, g_2 ∈ G, g_1 H = g_2 H ⟺ g_2^(−1) g_1 ∈ H
  ○ (⟹) Choose h∈H s.t. g_1 = g_2 h (since g_1 = g_1⋅1 ∈ g_1 H = g_2 H)
  ○ Therefore g_2^(−1) g_1 = h ∈ H
  ○ (⟸) Choose h∈H s.t. g_1 = g_2 h
  ○ ∀h′∈H, g_1 h′ = g_2 (hh′) ∈ g_2 H (since hh′∈H) ⇒ g_1 H ⊆ g_2 H
  ○ ∀h′∈H, g_2 h′ = g_1 (h^(−1) h′) ∈ g_1 H (since h^(−1) h′∈H) ⇒ g_2 H ⊆ g_1 H
  ○ Therefore g_1 H = g_2 H
• The relation ~ on G given by g_1 ~ g_2 iff g_1 ∈ g_2 H is an equivalence relation
  ○ Reflexive
    § If g∈G, then g = g⋅1 ∈ gH
    § So g ~ g
  ○ Symmetric
    § If g_1, g_2 ∈ G and g_1 ~ g_2, i.e. g_1 ∈ g_2 H, then
    § g_1 = g_2 h for some h∈H
    § Thus g_1 h^(−1) = g_2
    § So g_2 ∈ g_1 H, which means g_2 ~ g_1
  ○ Transitive
    § Suppose g_1 ~ g_2 and g_2 ~ g_3
    § This means g_1 ∈ g_2 H and g_2 ∈ g_3 H
    § Choose h_1, h_2 ∈ H s.t. g_1 = g_2 h_1 and g_2 = g_3 h_2
    § Then g_1 = g_3 h_2 h_1 ∈ g_3 H
    § So g_1 ~ g_3
• In particular, left/right cosets are either equal or disjoint
  ○ Suppose g_1, g_2 ∈ G, and z ∈ g_1 H ∩ g_2 H
  ○ Suppose x ∈ g_1 H, then x ~ g_1 ~ z ~ g_2
  ○ So x ∈ g_2 H
  ○ This implies that g_1 H ⊆ g_2 H
  ○ To get g_2 H ⊆ g_1 H, exchange the roles of g_1 and g_2
  ○ Therefore g_1 H = g_2 H
• Example 1
  ○ Let G be a group, H≤G. If h∈H, then hH = H
  ○ Let h′∈H, then h′ = h(h^(−1) h′) ∈ hH
  ○ Thus H ⊆ hH
  ○ By closure under the operation, hH ⊆ H
  ○ Therefore hH = H
• Example 2
  ○ Let G = Z/6Z, and H = the unique subgroup of Z/6Z of order 2
  ○ H = {0̅, 3̅} ≤ Z/6Z
  ○ Left cosets of H in G
    § 0̅ + {0̅, 3̅} = {0̅, 3̅}
    § 1̅ + {0̅, 3̅} = {1̅, 4̅}
    § 2̅ + {0̅, 3̅} = {2̅, 5̅}
    § 3̅ + {0̅, 3̅} = {0̅, 3̅}
    § 4̅ + {0̅, 3̅} = {1̅, 4̅}
    § 5̅ + {0̅, 3̅} = {2̅, 5̅}
  ○ Note
    § |G| = 6, |H| = 2, and H has 3 distinct cosets (2⋅3 = 6)
    § If G is a finite group and H≤G, then |H| divides |G|, and
    § H has |G|/|H| distinct left (or right) cosets in G

Normal Subgroup

• Definition
  ○ Let G be a group, and let N≤G
  ○ N is a normal subgroup if gng^(−1) ∈ N, ∀n∈N, ∀g∈G
  ○ In other words, N is closed under conjugation
  ○ If N≤G is normal, we write N⊴G
• Example 1
  ○ If G is abelian, every subgroup of G is normal
  ○ Suppose H≤G
  ○ Let h∈H and g∈G
  ○ Then ghg^(−1) = hgg^(−1) = h ∈ H
• Example 2
  ○ Let G = S_3, H = ⟨(1 2)⟩
  ○ Take g = (1 2 3), h = (1 2)
  ○ ghg^(−1) = (1 2 3)(1 2)(1 2 3)^(−1) = (1 2 3)(1 2)(1 3 2) = (2 3) ∉ H
  ○ Therefore H ⋬ G
• Example 3
  ○ ⟨(1 2 3)⟩ in S_3 is normal
• Example 4
  ○ Let G = GL_n(R)
  ○ Let P, A ∈ G, then PAP^(−1) is A expressed in a new basis
  ○ Note: In GL_n(R), conjugation amounts to changing basis

Proposition 23: Properties of Normal Subgroups

• Statement
  ○ Let N be a subgroup of a group G
  ○ N⊴G iff gN = Ng, ∀g∈G
• Proof (⟹)
  ○ Suppose N⊴G
  ○ Let g∈G, n∈N
  ○ gn = gn(g^(−1) g) = (gng^(−1))g ∈ Ng (since gng^(−1)∈N) ⇒ gN ⊆ Ng
  ○ ng = (gg^(−1))ng = g(g^(−1) ng) ∈ gN (since g^(−1) ng∈N) ⇒ Ng ⊆ gN
  ○ Therefore gN = Ng
• Proof (⟸)
  ○ Suppose gN = Ng, ∀g∈G
  ○ Let g∈G, n∈N
  ○ We must show gng^(−1) ∈ N
  ○ Choose n′∈N s.t. gn = n′g
  ○ Then gng^(−1) = n′ ∈ N
  ○ Therefore N⊴G
• Example
  ○ Let f: G→H be a homomorphism, then ker f ⊴ G
  ○ ker f is a subgroup of G
    § ker f ≠ ∅, since f(1_G) = 1_H
    § If k_1, k_2 ∈ ker f,
    § f(k_1 k_2^(−1)) = f(k_1) f(k_2)^(−1) = 1_H
    § Thus k_1 k_2^(−1) ∈ ker f
    § Therefore ker f ≤ G
  ○ ker f is normal
    § Let g∈G, k∈ker f
    § f(gkg^(−1)) = f(g) f(k) f(g)^(−1) = f(g) f(g)^(−1) = 1_H (since f(k) = 1_H)
    § ⇒ gkg^(−1) ∈ ker f ∎
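The Z/6Z computation above is easy to check by machine; this short sketch enumerates the left cosets g + H for H = {0̅, 3̅}:

```python
# Left cosets of H = {0, 3} in G = Z/6Z (written additively), following
# the Example 2 computation above.
G = set(range(6))
H = {0, 3}

cosets = {frozenset((g + h) % 6 for h in H) for g in G}
for c in sorted(sorted(c) for c in cosets):
    print(c)  # [0, 3] then [1, 4] then [2, 5]

# The distinct cosets partition G, and there are |G| / |H| = 3 of them.
assert len(cosets) == len(G) // len(H)
assert set().union(*cosets) == G
```

Using `frozenset` collapses the six translates to the three distinct cosets, matching the note that 2⋅3 = 6.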

3.3 Complexity of Algorithms

  • Feb 28, 2018
  • Shawn
  • Math 240
The Complexity of Algorithms

• Given an algorithm, how efficient is it at solving a problem for input of a particular size?
• To answer this question, we ask:
  ○ How much time does this algorithm use to solve the problem?
  ○ How much computer memory does it use?
• When we analyze the time the algorithm uses to solve the problem given input of a particular size, we are studying the time complexity of the algorithm.
• When we analyze the computer memory the algorithm uses, we are studying the space complexity of the algorithm.
• In this course, we focus on time complexity; the space complexity of algorithms is studied in later courses.
• We measure time complexity in terms of the number of operations an algorithm uses, and we use big-O and big-Theta notation to estimate it.
• This analysis tells us whether it is practical to use the algorithm on input of a particular size.
• It also lets us compare the efficiency of different algorithms for solving the same problem.
• We ignore implementation details, because accounting for them is extremely complicated.

Time Complexity

• To analyze the time complexity of algorithms, we count operations such as comparisons and arithmetic operations (addition, multiplication, etc.).
• We can estimate the time a computer actually uses to solve a problem from the time required to do these basic operations.
• We ignore minor details, such as the "housekeeping" aspects of the algorithm.
• We will focus on the worst-case time complexity of an algorithm.
• This provides an upper bound on the number of operations an algorithm uses to solve a problem with input of a particular size.
• It is usually much more difficult to determine the average-case time complexity of an algorithm.
• This is the average number of operations an algorithm uses to solve a problem over all inputs of a particular size.

Complexity Analysis of Algorithms

• Example: Describe the time complexity of the algorithm for finding the maximum element in a finite sequence.
  ○ Count the number of comparisons.
  ○ The comparison max < a_i is made n−1 times.
  ○ Each time i is incremented, a test is made to see if i≤n.
  ○ One last comparison determines that i>n.
  ○ Exactly 2(n−1)+1 = 2n−1 comparisons are made.
  ○ Hence, the time complexity of the algorithm is Θ(n).
• Example: Determine the time complexity of the linear search algorithm.
  ○ Count the number of comparisons.
  ○ At each step two comparisons are made: i≤n and x≠a_i.
  ○ To end the loop, one comparison i≤n is made.
  ○ After the loop, one more i≤n comparison is made.
  ○ If x=a_i, 2i+1 comparisons are used.
  ○ If x is not on the list, 2n+1 comparisons are made, and then an additional comparison is used to exit the loop.
  ○ So, in the worst case, 2n+2 comparisons are made.
  ○ Hence, the complexity is Θ(n).
• Example: Describe the average-case performance of the linear search algorithm.
  ○ Assume the element is in the list and that the possible positions are equally likely.
  ○ By the argument above, if x=a_i, the number of comparisons is 2i+1.
  ○ (3+5+7+…+(2n+1))/n = (2(1+2+…+n)+n)/n = 2[n(n+1)/2]/n + 1 = (n+1)+1 = n+2
  ○ Hence, the average-case complexity of linear search is Θ(n).
• Example: Describe the time complexity of binary search in terms of the number of comparisons used.
  ○ Assume (for simplicity) n = 2^k elements. Note that k = log n.
  ○ Two comparisons are made at each stage: i<j and x>a_m.
  ○ At the first iteration the size of the list is 2^k, and after the first iteration it is 2^(k−1).
  ○ Then 2^(k−2), and so on, until the size of the list is 2^1 = 2.
  ○ At the last step, a comparison tells us that the size of the list is 2^0 = 1, and the element is compared with the single remaining element.
  ○ Hence, at most 2k+2 = 2 log n + 2 comparisons are made.
  ○ Therefore, the time complexity is Θ(log n), better than linear search.
• Example: What is the worst-case complexity of bubble sort in terms of the number of comparisons made?
  ○ A sequence of n−1 passes is made through the list.
  ○ On pass i, n−i comparisons are made.
  ○ (n−1)+(n−2)+…+2+1 = n(n−1)/2 = (1/2)n^2 − (1/2)n
  ○ The worst-case complexity of bubble sort is Θ(n^2).
• Example: What is the worst-case complexity of insertion sort in terms of the number of comparisons made?
  ○ The total number of comparisons is
  ○ 2+3+…+n = n(n+1)/2 − 1
  ○ Therefore the complexity is Θ(n^2).
• Example: How many additions of integers and multiplications of integers are used by the matrix multiplication algorithm to multiply two n×n matrices?
  ○ There are n^2 entries in the product.
  ○ Finding each entry requires n multiplications and n−1 additions.
  ○ Hence, n^3 multiplications and n^2(n−1) additions are used.
  ○ Hence, the complexity of matrix multiplication is O(n^3).

Complexity of Problems

• Tractable Problem
  ○ There exists a polynomial time algorithm to solve this problem.
  ○ These problems are said to belong to the class P.
• Intractable Problem
  ○ There does not exist a polynomial time algorithm to solve this problem.
• Unsolvable Problem
  ○ No algorithm exists to solve this problem, e.g., the halting problem.
• Class NP
  ○ A solution can be checked in polynomial time,
  ○ but no polynomial time algorithm has been found for finding a solution to problems in this class.
• NP-Complete Class
  ○ If you find a polynomial time algorithm for one member of the class,
  ○ it can be used to solve all the problems in the class.

P Versus NP Problem

• The P versus NP problem asks whether P = NP.
• Are there problems whose solutions can be checked in polynomial time but that cannot be solved in polynomial time?
• Note that failing to find a polynomial time algorithm for a problem is different from proving that the problem cannot be solved by a polynomial time algorithm.
• If a polynomial time algorithm for any of the problems in the NP-complete class were found, then that algorithm could be used to obtain a polynomial time algorithm for every problem in the NP-complete class.
• Satisfiability (in Section 1.3) is an NP-complete problem.
• It is generally believed that P≠NP, since no one has been able to find a polynomial time algorithm for any of the problems in the NP-complete class.
• The problem of P versus NP remains one of the most famous unsolved problems in mathematics (including theoretical computer science).
• The Clay Mathematics Institute has offered a prize of $1,000,000 for a solution.
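To make the Θ(n) versus Θ(log n) contrast concrete, here is a small instrumented sketch (my own code, not the textbook's pseudocode) that counts only the element comparisons in each search:

```python
# Count element comparisons for linear vs. binary search on a sorted
# list of n = 2**k items, searching for the worst-case (last) element.
def linear_search(a, x):
    count = 0
    for item in a:
        count += 1            # comparison x == item
        if item == x:
            return count
    return count

def binary_search(a, x):
    count, lo, hi = 0, 0, len(a) - 1
    while lo < hi:            # halve the search range each iteration
        mid = (lo + hi) // 2
        count += 1            # comparison x > a[mid]
        if x > a[mid]:
            lo = mid + 1
        else:
            hi = mid
    return count

n = 1024
a = list(range(n))
assert linear_search(a, n - 1) == n    # grows linearly with n
assert binary_search(a, n - 1) == 10   # grows like log2(n): log2(1024) = 10
```

The exact counts differ from the 2n+2 and 2 log n + 2 in the notes (those also count the index tests i≤n and i<j), but the growth rates are the same.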

3.2 The Growth of Functions

  • Feb 27, 2018
  • Shawn
  • Math 240
The Growth of Functions

• In both computer science and mathematics, there are many times when we care about how fast a function grows.
• In computer science, we want to understand how quickly an algorithm can solve a problem as the size of the input grows.
  ○ We can compare the efficiency of two different algorithms for solving the same problem.
  ○ We can also determine whether it is practical to use a particular algorithm as the input grows.
  ○ We'll study these questions in Section 3.3.
• Two of the areas of mathematics where questions about the growth of functions are studied are:
  ○ number theory (covered in Chapter 4)
  ○ combinatorics (covered in Chapters 6 and 8)

Big-O Notation

• Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers.
• We say that f(x) is O(g(x)) if there are constants C and k such that
  ○ |f(x)| ≤ C|g(x)| whenever x > k.
• This is read as "f(x) is big-O of g(x)" or "g asymptotically dominates f."
• The constants C and k are called witnesses to the relationship "f(x) is O(g(x))."
• Only one pair of witnesses is needed.

Some Important Points about Big-O Notation

• If one pair of witnesses is found, then there are infinitely many pairs:
• we can always make k or C larger and still maintain the inequality |f(x)| ≤ C|g(x)|.
• If f(x) is O(g(x)) and g(x) is O(h(x)), then f(x) is O(h(x)).
• You may see "f(x) = O(g(x))" instead of "f(x) is O(g(x))."
• But this is an abuse of the equals sign, since the meaning is that an inequality relates the values of f and g for sufficiently large values of x.
• It is OK to write f(x) ∈ O(g(x)), because O(g(x)) represents the set of functions that are O(g(x)).
• Usually, we will drop the absolute value signs, since we will always deal with functions that take on positive values.

Using the Definition of Big-O Notation

• Example: Show that f(x) = x^2 + 2x + 1 is O(x^2).
  ○ When x>1, we have x < x^2 and 1 < x^2,
  ○ so 0 ≤ x^2 + 2x + 1 ≤ x^2 + 2x^2 + x^2 = 4x^2.
  ○ Take C = 4 and k = 1 as witnesses to show that f(x) is O(x^2).
  ○ Alternatively, when x>2, we have 2x ≤ x^2 and 1 < x^2.
  ○ Hence, 0 ≤ x^2 + 2x + 1 ≤ x^2 + x^2 + x^2 = 3x^2 when x>2.
  ○ Take C = 3 and k = 2 as witnesses instead.
• Example: Show that 7x^2 is O(x^3).
  ○ When x>7, 7x^2 < x^3.
  ○ Take C = 1 and k = 7 as witnesses to establish that 7x^2 is O(x^3).
  ○ (Would C = 7 and k = 1 work?) Yes
• Example: Show that n^2 is not O(n).
  ○ Suppose there were constants C and k for which n^2 ≤ Cn whenever n>k.
  ○ Then n ≤ C must hold for all n>k. A contradiction!

Big-O Estimates for Polynomials

• Let f(x) = a_n x^n + a_(n−1) x^(n−1) + … + a_1 x + a_0, where a_0, a_1, …, a_n are real numbers with a_n ≠ 0.
• Then f(x) is O(x^n):
• |f(x)| = |a_n x^n + a_(n−1) x^(n−1) + … + a_1 x + a_0|
• ≤ |a_n| x^n + |a_(n−1)| x^(n−1) + … + |a_1| x + |a_0|    (for x > 0)
• = x^n (|a_n| + |a_(n−1)|/x + … + |a_1|/x^(n−1) + |a_0|/x^n)
• ≤ x^n (|a_n| + |a_(n−1)| + … + |a_1| + |a_0|)    (for x > 1)
• Take C = |a_n| + |a_(n−1)| + … + |a_0| and k = 1.
• Then f(x) is O(x^n).
• The leading term a_n x^n of a polynomial dominates its growth.

Big-O Estimates for Some Important Functions

• Example: Use big-O notation to estimate the sum of the first n positive integers.
  ○ 1+2+…+n ≤ n+n+…+n = n^2
  ○ So 1+2+…+n is O(n^2), taking C = 1 and k = 1.
• Example: Use big-O notation to estimate the factorial function f(n) = n! = 1×2×…×n.
  ○ n! = 1×2×…×n ≤ n×n×…×n = n^n
  ○ So n! is O(n^n), taking C = 1 and k = 1.
• Example: Use big-O notation to estimate log n!.
  ○ Given that n! ≤ n^n,
  ○ log n! ≤ log n^n = n log n.
  ○ Hence, log n! is O(n log n), taking C = 1 and k = 1.

Combinations of Functions

• If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then
  ○ (f_1+f_2)(x) is O(max{|g_1(x)|, |g_2(x)|}).
• If f_1(x) and f_2(x) are both O(g(x)), then
  ○ (f_1+f_2)(x) is O(g(x)).
• If f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)), then
  ○ (f_1 f_2)(x) is O(g_1(x) g_2(x)).
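The witnesses above are easy to sanity-check numerically. This sketch (my own helper `holds`, not from the text) tests C = 4, k = 1 and C = 3, k = 2 for f(x) = x^2 + 2x + 1 over a range of integer x:

```python
# Numeric spot-check (not a proof) of big-O witnesses for
# f(x) = x^2 + 2x + 1 versus g(x) = x^2.
f = lambda x: x**2 + 2*x + 1
g = lambda x: x**2

def holds(C, k, xs):
    """Check |f(x)| <= C * |g(x)| for every sampled x > k."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in xs if x > k)

assert holds(4, 1, range(2, 10_000))   # witnesses C = 4, k = 1
assert holds(3, 2, range(3, 10_000))   # witnesses C = 3, k = 2
assert not holds(1, 1, range(2, 100))  # C = 1, k = 1 fails: f(x) > x^2
```

A finite sample can only refute a witness pair, never prove it, so the inequality for all x > k still needs the algebraic argument above.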
Big-Omega Notation

• Let f and g be functions from the set of integers/real numbers to the set of real numbers.
• We say that f(x) is Ω(g(x)) if there are constants C and k such that |f(x)| ≥ C|g(x)| whenever x > k.
• We say that "f(x) is big-Omega of g(x)."
• Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound:
• big-Omega tells us that a function grows at least as fast as another.
• f(x) is Ω(g(x)) if and only if g(x) is O(f(x)).
• This follows from the definitions.
• Example: Show f(x) = 8x^3 + 5x^2 + 7 is Ω(g(x)), where g(x) = x^3.
  ○ f(x) = 8x^3 + 5x^2 + 7 ≥ 8x^3 for all positive real numbers x.
  ○ Is it also the case that g(x) = x^3 is O(8x^3 + 5x^2 + 7)? Yes

Big-Theta Notation

• Let f and g be functions from the set of integers/real numbers to the set of real numbers.
• The function f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)).
• We say that
  ○ "f is big-Theta of g(x)"
  ○ "f(x) is of order g(x)"
  ○ "f(x) and g(x) are of the same order."
• f(x) is Θ(g(x)) if and only if there exist constants C_1, C_2, and k such that C_1|g(x)| ≤ |f(x)| ≤ C_2|g(x)| whenever x > k.
• This follows from the definitions of big-O and big-Omega.
• When f(x) is Θ(g(x)), it must also be the case that g(x) is Θ(f(x)).
• Note that f(x) is Θ(g(x)) if and only if f(x) is O(g(x)) and g(x) is O(f(x)).
• Sometimes writers are careless and write as if big-O notation had the same meaning as big-Theta.

Examples of Big-Theta Notation

• Example 1: Show that the sum of the first n positive integers is Θ(n^2).
  ○ Let f(n) = 1 + 2 + … + n.
  ○ We have already shown that f(n) is O(n^2).
  ○ To show that f(n) is Ω(n^2), we need a positive constant C such that f(n) ≥ Cn^2 for sufficiently large n.
  ○ Summing only the terms greater than n/2, we obtain the inequality
  ○ 1 + 2 + … + n ≥ ⌈n/2⌉ + (⌈n/2⌉+1) + … + n ≥ ⌈n/2⌉ + ⌈n/2⌉ + … + ⌈n/2⌉ = (n − ⌈n/2⌉ + 1)⌈n/2⌉ ≥ (n/2)(n/2) = n^2/4
  ○ Taking C = 1/4, f(n) ≥ Cn^2 for all positive integers n.
  ○ Hence, f(n) is Ω(n^2), and we can conclude that f(n) is Θ(n^2).
• Example 2: Show that f(x) = 3x^2 + 8x log x is Θ(x^2).
  ○ f(x) = 3x^2 + 8x log x ≤ 11x^2 for x > 1,
  ○ since 0 ≤ 8x log x ≤ 8x^2.
  ○ Hence, 3x^2 + 8x log x is O(x^2).
  ○ x^2 is clearly O(3x^2 + 8x log x).
  ○ Hence, 3x^2 + 8x log x is Θ(x^2).
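Similarly, the Θ(x^2) claim in Example 2 can be spot-checked by squeezing f between C_1 x^2 and C_2 x^2 with C_1 = 3, C_2 = 11 (natural log is assumed here; changing the base only changes the constant):

```python
import math

# Spot-check (not a proof): 3x^2 + 8x*log(x) lies between 3x^2 and 11x^2
# for the sampled x > 1, matching the Theta(x^2) claim above.
f = lambda x: 3 * x**2 + 8 * x * math.log(x)

for x in range(2, 10_000):
    assert 3 * x**2 <= f(x) <= 11 * x**2
```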

Math 521 – 2/26

  • Feb 27, 2018
  • Shawn
  • Math 521
Theorem 2.24

(a) For any collection {G_α} of open sets, ⋃_α G_α is open
  ○ Suppose G_α is open for all α
  ○ Let G = ⋃_α G_α
  ○ If x∈G, then x∈G_α for some α
  ○ Since G_α is open, there is a neighborhood about x contained in G_α,
  ○ and consequently that neighborhood is also contained in G
  ○ Thus G is open
(b) For any collection {F_α} of closed sets, ⋂_α F_α is closed
  ○ Suppose F_α is closed for all α
  ○ Then F_α^c is open by Theorem 2.23
  ○ So ⋃_α F_α^c is open by (a)
  ○ (⋂_α F_α)^c = ⋃_α F_α^c by De Morgan's Law
  ○ Thus (⋂_α F_α)^c is open
  ○ Therefore ⋂_α F_α is closed by Theorem 2.23
(c) For any finite collection G_1, G_2, …, G_n of open sets, ⋂_(i=1)^n G_i is also open
  ○ Suppose G_1, G_2, …, G_n are open
  ○ Let x ∈ H = ⋂_(i=1)^n G_i
  ○ So x∈G_i for 1≤i≤n
  ○ By definition, since each G_i is open,
  ○ x is contained in a neighborhood N_(r_i)(x) ⊂ G_i
  ○ Let r = min(r_1, r_2, …, r_n)
  ○ Then N_r(x) ⊂ G_i for 1≤i≤n
  ○ So N_r(x) ⊂ H
  ○ Thus H = ⋂_(i=1)^n G_i is open
(d) For any finite collection F_1, F_2, …, F_n of closed sets, ⋃_(i=1)^n F_i is also closed
  ○ Suppose F_1, F_2, …, F_n are closed
  ○ Then F_i^c is open by Theorem 2.23
  ○ So ⋂_(i=1)^n F_i^c is open by (c)
  ○ (⋃_(i=1)^n F_i)^c = ⋂_(i=1)^n F_i^c by De Morgan's Law
  ○ Thus (⋃_(i=1)^n F_i)^c is open
  ○ Therefore ⋃_(i=1)^n F_i is closed by Theorem 2.23
• Note
  ○ ⋂_(n=1)^∞ (−1/n, 1/n) = {0}
  ○ Each (−1/n, 1/n) is open, ∀n∈N, while {0} is closed,
  ○ so an infinite intersection of open sets need not be open

Closure

• Let X be a metric space
• If E ⊂ X and E′ denotes the set of limit points of E in X,
• then the closure of E is defined to be E̅ = E ∪ E′

Theorem 2.27

• If X is a metric space and E ⊂ X, then
• E̅ is closed
  ○ Let p ∈ (E̅)^c
  ○ Then p is neither a point of E nor a limit point of E
  ○ So there exists a neighborhood N about p that contains no points of E
  ○ So N ⊂ (E̅)^c
  ○ i.e. every point of (E̅)^c is an interior point
  ○ Thus (E̅)^c is open
  ○ Therefore E̅ is closed
• E = E̅ iff E is closed
  ○ If E = E̅, then E is closed
  ○ If E is closed, E contains its limit points, so E′ ⊂ E and E = E̅
• E̅ ⊂ F for every closed set F ⊂ X s.t. E ⊂ F
  ○ Suppose F is closed and E ⊂ F
  ○ F is closed ⇒ F′ ⊂ F
  ○ E ⊂ F ⇒ E′ ⊂ F′ ⊂ F
  ○ Thus E̅ = E ∪ E′ ⊂ F
• Intuition: E̅ is the smallest closed set in X containing E

Theorem 2.28

• Statement
  ○ If E ≠ ∅, E ⊂ R, and E is bounded above, then sup E ∈ E̅
  ○ Hence sup E ∈ E if E is closed
• Proof
  ○ Let y = sup E
  ○ If y ∈ E
    § Clearly y ∈ E̅
  ○ If y ∉ E
    § Let h > 0, and consider the interval (y−h, y)
    § Suppose no x ∈ E lies in this interval; then y−h is an upper bound for E
    § But this contradicts the fact that y = sup E
    § So there must be some x ∈ E with y−h < x < y
    § Thus, every neighborhood about y contains some x ∈ E
    § So y is a limit point of E
    § i.e. y ∈ E′ ⊂ E̅
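Theorem 2.28 can be illustrated numerically with the assumed example E = {1 − 1/n : n ∈ N}: sup E = 1 is not in E, yet every interval (1 − h, 1) contains points of E, so 1 ∈ E′ ⊂ E̅.

```python
# Numeric illustration (not a proof) of Theorem 2.28 for the example
# E = {1 - 1/n : n in N}, which is bounded above with sup E = 1.
E = [1 - 1/n for n in range(1, 10_000)]
sup_E = 1.0

assert all(x < sup_E for x in E)   # 1 is an upper bound, and 1 is not in E
for h in (0.1, 0.01, 0.001):
    # Some x in E lies in (sup E - h, sup E), as in the proof above.
    assert any(sup_E - h < x < sup_E for x in E)
```

Only a finite slice of E is sampled, so taking h smaller than 1/10000 would require enlarging the sample.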
Copyright © 2018.