An example of the Gaussian quadrature rule using two approaches

Here is an example of using the Gaussian quadrature rule through two approaches:

EITHER

by applying it to the original integrand after a change of variable in its argument

OR

by applying it to an equivalent integrand, obtained by changing the limits of integration to -1 to 1.

http://nm.MathForCollege.com/blog/3pointquadruleexample.pdf
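The change-of-interval idea behind both approaches can be sketched in a few lines: an integral over [a, b] maps onto [-1, 1] by the substitution x = (b - a)t/2 + (a + b)/2, after which the tabulated 3-point nodes and weights apply directly. Here is a minimal Python sketch of that idea (the function name gauss3 and the sample integrand are illustrative, not taken from the linked pdf):

```python
import math

def gauss3(f, a, b):
    """Approximate the integral of f over [a, b] with the 3-point
    Gauss-Legendre rule, after mapping [a, b] onto [-1, 1]."""
    # Nodes and weights of the 3-point rule on [-1, 1]
    nodes = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
    weights = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)
    half = 0.5 * (b - a)   # Jacobian of the change of variable
    mid = 0.5 * (a + b)
    return half * sum(w * f(half * t + mid) for w, t in zip(weights, nodes))

# Example: integrate x^2 over [0, 3].  The 3-point rule is exact for
# polynomials up to degree 5, so this gives 9 up to rounding.
print(gauss3(lambda x: x * x, 0.0, 3.0))
```

The linked pdf works the same kind of example out by hand with both approaches.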

___________________

This post is brought to you by

Open Courseware for Matrix Algebra Released

The open courseware for “Introduction to Matrix Algebra” has been released. The topics include

 

  • Chapter 1: Introduction
  • Chapter 2: Vectors
  • Chapter 3: Binary Matrix Operations
  • Chapter 4: Unary Matrix Operations
  • Chapter 5: System of Equations
  • Chapter 6: Gaussian Elimination Method
  • Chapter 7: LU Decomposition
  • Chapter 8: Gauss-Seidel Method
  • Chapter 9: Adequacy of Solutions
  • Chapter 10: Eigenvalues and Eigenvectors

For more details go to http://tap.usf.edu/stories/open-courseware-released-for-introduction-to-matrix-algebra/

___________________________________________


Friday, October 31, 2014, 11:59PM EDT (November 1, 2014, 3:59AM GMT) – Release Date for an Open Courseware in Introduction to Matrix Algebra

In true Netflix style, on Halloween night, Friday, October 31, 2014 at 11:59PM EDT, we are releasing all resources simultaneously for an open courseware on Introduction to Matrix Algebra at http://mathforcollege.com/ma/. The courseware will include

  • 150 YouTube video lectures totaling approximately 14 hours,
  • 10 textbook chapters,
  • 10 online multiple-choice quizzes with complete solutions,
  • 10 problem sets, and
  • PowerPoint presentations.

So set your calendar for October 31 for some matrix algebra binging rather than candy binging.  For more info and questions, contact Autar Kaw.

  • Chapter 1: Introduction
  • Chapter 2: Vectors
  • Chapter 3: Binary Matrix Operations
  • Chapter 4: Unary Matrix Operations
  • Chapter 5: System of Equations
  • Chapter 6: Gaussian Elimination Method
  • Chapter 7: LU Decomposition
  • Chapter 8: Gauss-Seidel Method
  • Chapter 9: Adequacy of Solutions
  • Chapter 10: Eigenvalues and Eigenvectors

________________________

 

Machine epsilon – Question 5 of 5

In the previous blog posts, we answered

Here we answer the last question.

________________________

Repeated roots in ordinary differential equation – next independent solution – where does that come from?

When solving a constant-coefficient linear ordinary differential equation whose characteristic equation has a repeated root m, why do we get the next independent solutions in the form x*e^(m*x), x^2*e^(m*x), and so on? Show this through an example.

See this pdf file for the answer.
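The pdf works this out in full; as a quick numerical sanity check (a Python sketch, not taken from the pdf), one can verify that y = x*e^x satisfies y'' - 2y' + y = 0, whose characteristic equation (m - 1)^2 = 0 has the repeated root m = 1:

```python
import math

# Candidate second solution for y'' - 2 y' + y = 0, whose
# characteristic equation (m - 1)^2 = 0 has the repeated root m = 1.
def y(x):
    return x * math.exp(x)

def residual(x, h=1e-4):
    """Residual y'' - 2 y' + y evaluated with central differences."""
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / h**2
    yp = (y(x + h) - y(x - h)) / (2.0 * h)
    return ypp - 2.0 * yp + y(x)

# The residual is zero up to finite-difference and rounding error.
for x in (0.0, 0.5, 1.0, 2.0):
    print(f"x = {x}: residual = {residual(x):.1e}")
```

The residuals come out near zero at every sample point, which is consistent with x*e^x being a genuine second solution; the pdf shows where that form comes from.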

________________________

Machine Epsilon – Question 4 of 5

In the previous blog posts, we answered

Here we answer the next question.

A future post will answer this last question.
Question 5 of 5: What is the proof that the absolute relative true error in representing a number on a machine is always less than the machine epsilon?

_________________


Machine epsilon – Question 3 of 5

In the previous blog posts, we answered

Here we answer the next question.
Future posts will answer these questions
Question 4 of 5: What is the significance of machine epsilon for a student in an introductory course in numerical methods?

Question 5 of 5: What is the proof that the absolute relative true error in representing a number on a machine is always less than the machine epsilon?
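The bits question has a one-line demonstration in Python, which uses IEEE-754 double precision for its floats (a sketch of the relationship; the post itself gives the full explanation):

```python
import sys

# IEEE-754 double precision stores 52 explicit fraction bits, so the
# gap between 1.0 and the next larger representable number is 2^-52,
# one common definition of machine epsilon.
print(sys.float_info.mant_dig)               # 53 = 52 stored bits + implicit leading 1
print(sys.float_info.epsilon == 2.0 ** -52)  # True
print(1.0 + 2.0 ** -53 == 1.0)               # True: half of epsilon is lost in rounding
```

In general, a floating-point format with t fraction bits has machine epsilon 2^(-t).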
_________________


Machine epsilon – Question 2 of 5

In the previous blog post, we answered

Here we answer the next question.

Future posts will answer these questions
Question 3 of 5: How is machine epsilon related to the number of bits used to represent a floating-point number?
Question 4 of 5: What is the significance of machine epsilon for a student in an introductory course in numerical methods?
Question 5 of 5: What is the proof that the absolute relative true error in representing a number on a machine is always less than the machine epsilon?
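The code answer to Question 2 is a halving loop. The post uses MATLAB; the same idea in Python (a sketch, not the post's own code) looks like this:

```python
# Keep halving eps until adding half of it to 1 no longer changes 1;
# the last distinguishable value is the machine epsilon.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps)   # 2^-52, about 2.22e-16, for IEEE-754 double precision
```

The loop stops once eps/2 is small enough that 1 + eps/2 rounds back to 1, leaving eps equal to the gap between 1 and the next representable number.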

_________________


Machine epsilon – Question 1 of 5

Future posts will answer these questions

Question 2 of 5: How do I find the machine epsilon using a MATLAB code?

Question 3 of 5: How is machine epsilon related to the number of bits used to represent a floating-point number?

Question 4 of 5: What is the significance of machine epsilon for a student in an introductory course in numerical methods?

Question 5 of 5: What is the proof that the absolute relative true error in representing a number on a machine is always less than the machine epsilon?

_________________


A Facebook Page for Numerical Methods

We have started a Facebook page for numerical methods. I welcome you to join the page and spread the word about it. Ask a question and stay updated with new resources.

https://www.facebook.com/numericalmethods

The Facebook page will be a place to continue the social-media conversation that has been going on via YouTube comments, Twitter, and this blog.

_______________________________
