
Even numbers are multiples of 2, and more generally, multiples arise throughout mathematics and everyday life. The mass of a stack of bricks is a multiple of the mass of one brick. The number of pages in a packet of notebooks is a multiple of the number of pages in one notebook.

The factors of a number can be displayed using rectangular arrays. Some numbers, such as 30, can arise in many different ways as a product. This idea leads to the classification of numbers greater than 1 as either prime or composite, and to a listing of all the factors of a number. There are several groups of well-known divisibility tests that can check whether a number is a factor without actually performing the division.

These tests greatly simplify the listing of factors of numbers. Repeated addition leads to multiplication. Repeated multiplication in turn leads to powers, and manipulating powers in turn relies on five index laws. Powers are introduced in this module, together with four of the five index laws.

We are used to comparing numbers in terms of their size. The highest common factor (HCF) and lowest common multiple (LCM) allow us to compare numbers in terms of their factors and multiples. For example, when we look at 30 and 12, we see that they are both multiples of 6, and that 6 is the greatest factor common to both numbers.
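These two quantities are easy to compute; a minimal sketch in Python, using the 30 and 12 from the text (the LCM formula assumes nonzero numbers):

```python
from math import gcd

a, b = 30, 12
hcf = gcd(a, b)        # highest common factor
lcm = a * b // hcf     # lowest common multiple for nonzero a, b
print(hcf, lcm)        # 6 60
```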

We also see that 60 is a multiple of both numbers, and that 60 is the lowest common multiple of them apart from 0. The array representing the even number 10 has the dots divided evenly into two equal rows of 5, but the array representing the odd number 11 has an extra odd dot left over. Indeed, our concept of the number 2 is so different from our conceptions of all other numbers that we even use different language. We divide a pie between two people, but among three people.

We identify two alternatives, but three options. There are several obvious facts about calculations with odd and even numbers that are very useful as an automatic check of calculations. First, addition and subtraction:
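These parity facts can be verified exhaustively over a small range; a minimal sketch:

```python
# Check: the sum of two numbers is even exactly when
# the two numbers have the same parity.
for m in range(20):
    for n in range(20):
        same_parity = (m % 2) == (n % 2)
        assert ((m + n) % 2 == 0) == same_parity
print("parity rules for addition confirmed on 0..19")
```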

Proofs by arrays usually convince students more than algebraic proofs. The other cases are very similar. Draw four diagrams to illustrate the four cases of subtraction of odd and even numbers. Proofs by arrays can be used here, but they are unwieldy. Instead, we will use the previous results for adding odd and even numbers. Here are examples of the three cases. The first and second products are even because each can be written as the sum of even numbers.

The third product can be written as the sum of pairs of odd numbers, plus an extra odd number. Each bracket is even, because it is the sum of two odd numbers, so the whole sum is odd. What can we say about the quotients of odd and even numbers?

Assuming in each case that the division has no remainder, complete each sentence below, if possible, and justify your answers by examples. The previous results on the arithmetic of odd and even numbers can be obtained by algebra later, once pronumerals have been introduced and the expansion of brackets and taking out a common factor have been dealt with.

The important first step is: obtain the previous results on the addition and multiplication of odd and even numbers by algebra. In the previous section, we represented even numbers by arrays with two equal rows, and odd numbers by arrays with two rows in which one row has one more dot than the other. Representing numbers by arrays is an excellent way to illustrate some of their properties. For example, the arrays below illustrate significant properties of the numbers 10, 9, 8 and 7.

The convention used in these modules is that the first factor represents the number of rows, and the second factor represents the number of columns. The opposite convention, however, would be equally acceptable. The first array shows that 9 is a square because it can be represented as a square array. Because there is no 2-row array, the number 9 is odd. The second array is the trivial array.

The third array shows that 8 is a cube because it can be represented as a cubic array. The first array shows that 8 is even, and the second array is the trivial array. The interesting thing about the number 7 is that it only has the trivial array, because 7 dots cannot be arranged in any rectangular array apart from a trivial array.

Numbers greater than 1 with only a trivial rectangular array are called prime numbers. All other whole numbers greater than 1 are called composite numbers. We shall discuss prime numbers in a great deal more detail in the later module, Primes and Prime Factorisation.

The usual definition of a prime number expresses exactly the same thing in terms of factors. Here are the only possible rectangular arrays for the first four prime numbers. Rectangular arrays are not the only way that numbers can usefully be represented by patterns of dots. In the diagram to the right, two copies of the fourth triangular number have been fitted together to make a rectangle. Explain how to calculate from this diagram that the 4th triangular number is 10. Hence calculate the nth triangular number.
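The rectangle construction gives the general formula: two copies of the nth triangular number tile an n by (n + 1) rectangle, so the nth triangular number is n(n + 1)/2. A quick check:

```python
def triangular(n):
    # two copies of the nth triangular number fill an n x (n + 1) rectangle
    return n * (n + 1) // 2

print(triangular(4))     # 10, matching the diagram
print(triangular(100))   # 5050
```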

Multiples, common multiples and the LCM. Because 6 is an even number, all its multiples are even. The multiples of an odd number such as 7, however, alternate even, odd, even, odd, even,… because we are adding the odd number 7 at each step. The number zero is a multiple of every number.

The multiples of zero are all zero. Every other whole number has infinitely many multiples. We can illustrate the multiples of a number using arrays with three columns and an increasing number of rows. Here are the first few multiples of 3. Rows and columns can be exchanged. Thus the multiples of 3 could also be illustrated using arrays with three rows and an increasing number of columns.

The repeating pattern of multiples is a great help in understanding division. Here again are the multiples of 6. If we divide any of these multiples by 6, we get a quotient with remainder zero. To divide any other number such as 29 by 6, we first locate 29 between two multiples of 6. Thus we locate 29 between 24 and 30. Notice that the remainder is always a whole number less than 6, because the multiples of 6 step up by 6 each time.

Hence with division by 6 there are only six possible remainders: 0, 1, 2, 3, 4 and 5. This result takes a very simple form when we divide by 2, because the only possible remainders are 0 and 1. In later years, when students have become far more confident with algebra, these remarks about division can be written down very precisely in what is called the division algorithm. The table above shows all the whole numbers written out systematically in 7 columns. Suppose that each number in the table is divided by 7 to produce a quotient and a remainder.
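The quotient-and-remainder arithmetic described here is exactly what Python's divmod computes; a small illustration with the numbers from the text:

```python
# 29 lies between the multiples 24 = 6 * 4 and 30 = 6 * 5
q, r = divmod(29, 6)
print(q, r)    # 4 5

# dividing by 6 can only ever leave the remainders 0..5
print(sorted({n % 6 for n in range(100)}))    # [0, 1, 2, 3, 4, 5]
```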

When 22 and 41 are divided by 6, their remainders are 4 and 5 respectively. An important way to compare two numbers is to compare their lists of multiples. Let us write out the first few multiples of 4, and the first few multiples of 6, and compare the two lists. The numbers that occur on both lists have been circled, and are called common multiples. Apart from zero, which is a common multiple of any two numbers, the lowest common multiple of 4 and 6 is 12. These same procedures can be done with any set of two or more non-zero whole numbers.

Two or more nonzero numbers always have a common multiple — just multiply the numbers together. But the product of the numbers is not necessarily their lowest common multiple. You will have noticed that the list of common multiples of 4 and 6 is actually a list of multiples of their LCM 12. Similarly, the list of common multiples of 12 and 16 is a list of the multiples of their LCM 48. This is a general result, which in Year 7 is best demonstrated by examples.

In an exercise at the end of the module Primes and Prime Factorisation, however, we have indicated how to prove the result using prime factorisation.

Factors, common factors and the HCF. The number zero is a multiple of every number, so every number is a factor of zero. On the other hand, zero is the only multiple of zero, so zero is a factor of no numbers except zero. These rather odd remarks are better left unsaid, unless students insist.

They should certainly not become a distraction from the nonzero whole numbers that we want to discuss. The product of two nonzero whole numbers is always greater than or equal to each factor in the product. Hence the factors of a nonzero number like 12 are all less than or equal to 12. Thus whereas a positive whole number has infinitely many multiples, it has only finitely many factors. The long way to find all the factors of 12 is to test systematically all the whole numbers less than 12 to see whether or not they go into 12 without remainder.

The list of factors of 12 is 1, 2, 3, 4, 6 and 12. It is very easy to overlook factors by this method, however. A far more efficient way is to look for pairs of factors whose product is 12. Begin by testing all the whole numbers 1, 2, … that could be the smaller of a pair of factors with product 12. We can display these pairs of factors by writing the 12 dots in all possible rectangular arrays.
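The pairing idea can be sketched in a few lines of Python: trial divisors need only run up to the square root of the number, and each hit yields a whole pair.

```python
def factors(n):
    # test candidate divisors d up to sqrt(n); each hit gives the pair (d, n // d)
    found = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            found.update({d, n // d})
        d += 1
    return sorted(found)

print(factors(12))   # [1, 2, 3, 4, 6, 12]
print(factors(60))   # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```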

For a larger number such as 60, the method recommended here has the following steps. Another important way to compare two numbers is to compare their lists of factors. Let us write out the lists of factors of 18 and 30, and compare the lists. The numbers that occur on both lists have been circled, and are called the common factors. As with common multiples, these procedures can be done with any list of two or more whole numbers.

Any collection of whole numbers always has 1 as a common factor. The question is whether the numbers have common factors greater than 1. Every whole number is a factor of 0, so the common factors of 0 and, say, 12 are just the factors of 12, and the HCF of 0 and 12 is 12. A nonzero whole number has only a finite number of factors, so it has a greatest factor.

Two or more numbers always have an HCF because at least one of them is nonzero. These remarks about zero, however, are distractions from the main ideas. You will have noticed that the list of common factors of 18 and 30 is actually a list of factors of their HCF 6. Similarly, the list of common factors of 30 and 75 is a list of the factors of their HCF 15. Again, this is a general result, which in Year 7 is best demonstrated by examples.

An exercise at the end of the module Primes and Prime Factorisation indicates how to prove the result using prime factorisation. The two relationships below between the HCF and the LCM are again best illustrated by examples in Year 7, but an exercise in the module Primes and Prime Factorisation indicates how they can be proven.

The first relationship is extremely useful, and is used routinely when working with common denominators of fractions.

We will use this example to illustrate the difference in performance between loops and vectorized operations in Python. In the last example, there may be a loop buried in the sum command. Let us do one final method, using linear algebra, in a single line. The key to understanding this is to recognize that the sum is just the result of a dot product of the x differences and y sums.
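A minimal sketch of the two forms of the trapezoid rule; the integrand here (sin on [0, pi], whose integral is 2) is an assumed example:

```python
import numpy as np

x = np.linspace(0, np.pi, 1000)
y = np.sin(x)

# loop form of the trapezoid rule
total = 0.0
for i in range(len(x) - 1):
    total += (x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0

# vectorized form: dot product of the x differences with the averaged y sums
vec = np.dot(np.diff(x), (y[:-1] + y[1:]) / 2.0)

print(total, vec)   # both close to 2.0, and equal to each other
```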

The loop method is straightforward to code, and looks a lot like the formula that defines the trapezoid method. However, the vectorized methods are much faster than the loop, so the loss of readability could be worth it for very large problems. The times here are considerably slower than in Matlab. I am not sure if that is a totally fair comparison. Here I am running Python through emacs, which may result in slower performance.

I also used a very crude way of timing the performance which lumps some system performance in too.

Simpson's rule. A more accurate numerical integration method than the trapezoid method is Simpson's rule. The syntax is similar to trapz, but the method is in scipy. The syntax in dblquad is a bit more complicated than in Matlab.
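A sketch of the Simpson's rule call mentioned above; the function and interval are assumed for illustration (in older scipy releases the function is named simps):

```python
import numpy as np
from scipy.integrate import simpson   # scipy.integrate.simps in older versions

x = np.linspace(0, np.pi, 101)
y = np.sin(x)
print(simpson(y, x=x))   # close to 2.0, more accurate than the trapezoid rule
```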

We have to provide callable functions for the range of the y-variable. Here they are constants, so we create lambda functions that return the constants. Also, note that the order of arguments in the integrand is different than in Matlab.
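A sketch of dblquad with constant limits supplied as lambda functions; the integrand x*y is an assumed example. Note the integrand receives y first, then x:

```python
from scipy.integrate import dblquad

# integral of x*y over x in [0, 2] and y in [0, 1]; exact value is 1
f = lambda y, x: x * y                         # y is the first argument
ans, err = dblquad(f, 0, 2, lambda x: 0.0, lambda x: 1.0)
print(ans)   # 1.0 to within the reported error estimate
```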

The syntax differs significantly for these simple examples, but the use of functions for the limits enables freedom to integrate over non-constant limits. A common need in engineering calculations is to integrate an equation over some range to determine the total change. An alternative to the scipy.

This method is not likely to be more accurate than quad, and it does not give you an error estimate. Matlab post. Python has the capability to do symbolic math through the sympy package. The symbolic math in sympy is pretty good. It is not up to the capability of Maple or Mathematica (but neither is Matlab), but it continues to be developed, and could be helpful in some situations.

Float numbers, i.e. numbers with decimals, cannot generally be represented exactly in a computer. This can lead to some artifacts when you have to compare float numbers that on paper should be the same, but in silico are not. In this example, we do some simple math that should result in an answer of 1, and then see if the answer is "equal" to one. The first line shows the result is not 1. You can see here why the equality statement fails.

We will print the two numbers to sixteen decimal places. The two numbers actually are not equal to each other because of float math. They are very, very close to each other, but not the same. This leads to the idea of asking if two numbers are equal to each other within some tolerance. The question of what tolerance to use requires thought.

Should it be an absolute tolerance? How large should the tolerance be? We will use the distance between 1 and the nearest floating point number (this is eps in Matlab). Below, we implement a comparison function from a published paper. For completeness, here are the other float comparison operators from that paper. We also show a few examples. As you can see, float comparisons can be tricky. You have to give a lot of thought to how to make the comparisons, and the functions shown above are not the only way to do it.
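A minimal illustration of the artifact and of tolerance-based comparison; 3 * 0.3 vs 0.9 is an assumed example:

```python
import math
import sys

a = 3 * 0.3
b = 0.9
print(a == b)          # False: a float-math artifact
print(f"{a:.17f}")
print(f"{b:.17f}")

eps = sys.float_info.epsilon     # distance from 1.0 to the next float, like eps in Matlab
print(abs(a - b) <= 4 * eps)     # True: equal within an absolute tolerance
print(math.isclose(a, b))        # True: equal within a relative tolerance
```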

You need to build in testing to make sure your comparisons are doing what you want. Numpy has some gotcha features for linear algebra purists. The first is that a 1d array is neither a row, nor a column vector. This would not be allowed in Matlab. Compare the previous behavior with this 2d array. You must transpose the second argument to make it dimensionally consistent. Try to figure this one out! Just by adding them you get a 2d array. In the next example, we have a 3 element vector and a 4 element vector.

These concepts are known in numpy as array broadcasting. These are points to keep in mind, as the operations do not strictly follow the conventions of linear algebra, and may be confusing at times. When solving linear equations, we can represent them in matrix form. Then we simply use numpy to solve them. It can be useful to confirm there should be a solution; the matrix rank will tell us that. Note that numpy.rank does not give you the matrix rank, but rather the number of dimensions of the array.
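A sketch of these gotchas and of solving a linear system; the arrays are assumed examples:

```python
import numpy as np

a = np.array([1, 2, 3])      # a 1d array: neither row nor column vector
print(a.shape, a.T.shape)    # (3,) (3,) -- transposing does nothing

row = np.array([[1, 2, 3]])       # shape (1, 3)
col = np.array([[10], [20]])      # shape (2, 1)
print(row + col)                  # broadcasting gives a (2, 3) 2d array

# solving A x = b in matrix form
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)                          # [2. 3.]

print(np.ndim(A))                 # 2: the number of dimensions, not the rank
```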

We compute the rank by computing the number of singular values of the matrix that are greater than zero, within a prescribed tolerance. We use the singular value decomposition in numpy for this. In Matlab you would use the rref command to see if there are any rows that are all zero, but this command does not exist in numpy.
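A sketch of the singular-value approach; the matrix is an assumed example with one dependent row:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row
              [1.0, 0.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)            # singular values only
tol = max(A.shape) * np.finfo(float).eps * s.max()
rank = int(np.sum(s > tol))
print(rank)                                       # 2
print(np.linalg.matrix_rank(A))                   # numpy's built-in agrees
```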

That command does not have practical use in numerical linear algebra and has not been implemented. Matlab comparison. Today we examine some methods of linear algebra that allow us to avoid writing explicit loops in Matlab for some kinds of mathematical operations. We can compute this with a loop, where you initialize y, and then add the product of the ith elements of a and b to y in each iteration of the loop. This is known to be slow for large vectors. The operation defined above is actually a dot product.

We can directly compute the dot product in numpy. Note that with 1d arrays, python knows what to do and does not require any transpose operations. This operation is like a weighted sum of squares. The old-fashioned way to do this is with a loop. We can also express this in matrix algebra form. Consider the sum of the product of three vectors.

This is like a weighted sum of products. We showed examples of the following equalities between traditional sum notations and linear algebra. These relationships enable one to write the sums as a single line of python code, which utilizes fast linear algebra subroutines, avoids the construction of slow loops, and reduces the opportunity for errors in the code. Admittedly, it introduces the opportunity for new types of errors, like using the wrong relationship, or linear algebra errors due to matrix size mismatches.
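The equalities can be sketched with assumed vectors a and b and weights w:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([3.0, 6.0, 8.0, 10.0])
w = np.array([0.1, 0.25, 0.12, 0.45])

print(np.dot(a, b))       # sum_i a_i * b_i = 79.0
print(np.dot(w, a**2))    # sum_i w_i * a_i^2, a weighted sum of squares
print(np.sum(w * a * b))  # sum_i w_i * a_i * b_i, three vectors at once
```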

Matlab post. Occasionally we have a set of vectors and we need to determine whether the vectors are linearly independent of each other. This may be necessary to determine if the vectors form a basis, to determine how many independent equations there are, or to determine how many independent reactions there are.

Matlab provides a rank command which gives you the number of singular values greater than some tolerance. The numpy function of the same name returns the number of dimensions in the array, so we will just compute the rank from the singular value decomposition. Let us break that down: eps is basically the smallest significant number; we multiply that by the size of A, and take the largest number.

We have to use some judgment in what the tolerance is, and what "zero" means. Let us show that one row can be expressed as a linear combination of the other rows. The number of rows is greater than the rank, so these vectors are not independent. Let's demonstrate that one vector can be defined as a linear combination of the other two vectors.

Mathematically we represent this as an equation, and to get there, we transpose each side. Matlab uses a tolerance to determine what is equal to zero. If there is uncertainty in the numbers, you may have to define what zero is. The default tolerance is usually very small. If we believe that any number less than 1e-5 is practically equivalent to zero, we can use that information to compute the rank like this.
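A sketch of the idea with assumed vectors, one of which is a linear combination of the other two, and a tolerance that defines what zero means:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + v2                     # a linear combination of the other two

A = np.vstack([v1, v2, v3])
print(np.linalg.matrix_rank(A))  # 2 < 3 rows, so the rows are dependent

# with noisy data, supply the tolerance that defines "zero"
B = A + 1e-7 * np.ones_like(A)
print(np.linalg.matrix_rank(B))            # small noise can inflate the rank
print(np.linalg.matrix_rank(B, tol=1e-5))  # 2 once we say what zero means
```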

A stoichiometric coefficient of 0 is used for species not participating in the reaction. You can see that reaction 6 is just the opposite of reaction 2, so it is clearly not independent. Also, reactions 3 and 5 are just the reverse of each other, so one of them can also be eliminated. There are many possible sets of independent reactions. In the code above, we use sympy to put the matrix into reduced row echelon form, which enables us to identify three independent reactions, and shows that three rows are all zero, i.e. they are not independent of the other reactions.

The choice of independent reactions is not unique. There is a nice discussion here on why there is not a rref command in numpy, primarily because one rarely actually needs it in linear algebra. Still, it is so often taught, and it helps to see visually what the rank of a matrix is, that I wanted to examine ways to get it. This rref form is a bit different than you might get from doing it by hand; the rows are also normalized. Let us check it out. There are "solutions", but there are a couple of red flags that should catch your eye.
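A sketch of rref in sympy; the matrix is an assumed example with one dependent row:

```python
import sympy as sp

M = sp.Matrix([[1, 2, 3],
               [2, 4, 6],    # twice the first row
               [1, 1, 1]])

R, pivots = M.rref()
print(R)        # rows normalized, dependent row reduced to zeros
print(pivots)   # (0, 1): two pivot columns, so the rank is 2
```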

First, the determinant is within machine precision of zero. Second, the elements of the inverse are all "large". Third, the solutions are all "large". All of these are indications of, or artifacts of, numerical imprecision.

LU decomposition, determinant. There are a few properties of a matrix that can make it easy to compute determinants.

So we simply subtract the sum of the diagonal from the length of the diagonal and then subtract 1 to get the number of swaps. According to the numpy documentation, a method similar to this is used to compute the determinant. If the built in linear algebra functions in numpy and scipy do not meet your needs, it is often possible to directly call lapack functions.
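A sketch using the LU factorization from scipy; the matrix is an assumed example:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[6.0, 2.0, 3.0],
              [1.0, 1.0, 1.0],
              [0.0, 4.0, 9.0]])

P, L, U = lu(A)   # A = P @ L @ U
# L has a unit diagonal, so det(A) = det(P) * product of U's diagonal,
# where det(P) is +/-1 depending on the number of row swaps
detA = np.linalg.det(P) * np.prod(np.diag(U))
print(detA)                 # close to 24
print(np.linalg.det(A))     # agrees
```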

Here we call a function to solve a set of complex linear equations. But, one day it might be helpful to know this can be done. Nonlinear algebra problems are typically solved using an iterative process that terminates when the solution is found within a specified tolerance. This process is hidden from the user. In Matlab, the default tolerance was not sufficient to get a good solution.
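The complex-equation case mentioned above can be sketched by reaching the LAPACK driver zgesv through scipy.linalg.lapack; the small system here is an assumed example:

```python
import numpy as np
from scipy.linalg import lapack

A = np.array([[1 + 1j, 2 + 0j],
              [3 + 0j, 4 - 2j]])
b = np.array([[2 + 0j],
              [1 + 1j]])

# zgesv solves A x = b for complex double-precision matrices
lu_, piv, x, info = lapack.zgesv(A, b)
print(info)                    # 0 means success
print(np.allclose(A @ x, b))   # True
```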

Here it is. Original post in Matlab. What is the exit molar flow rate? We need to solve the following equation:. We start by creating a function handle that describes the integrand. We can use this function in the quad command to evaluate the integral. This example seemed a little easier in Matlab, where the quad function seemed to get automatically vectorized. Here we had to do it by hand. In principle this is easy, we simply need some initial guesses and a nonlinear solver.
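The quad usage described above can be sketched with an assumed integrand (not the reactor equation, which is elided here):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2)      # function handle for the integrand

ans, err = quad(f, 0.0, 1.0)
print(ans, err)                  # about 0.7468, with an error estimate
```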

The challenge here is what would you guess? There could be many solutions. The equations are implicit, so it is not easy to graph them, but let us give it a shot, starting on the x range -5 to 5. The idea is set a value for x, and then solve for y in each equation.

We can even use that guess with fsolve. It is disappointingly easy! But, keep in mind that in 3 or more dimensions, you cannot perform this visualization, and another method could be required. We explore a method that bypasses this problem today. Why do we do that? From calculus, you can show that the derivatives of the solution with respect to the parameter satisfy a set of linear equations, and you can use Cramer's rule to solve for them. The approximation could be improved by lowering the tolerance on the ODE solver.

The functions evaluate to a small number, close to zero. You have to apply some judgment to determine if that is sufficiently accurate. For instance if the units on that answer are kilometers, but you need an answer accurate to a millimeter, this may not be accurate enough. This is a fair amount of work to get a solution! The idea is to solve a simple problem, and then gradually turn on the hard part by the lambda parameter.
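A sketch of this continuation idea on an assumed scalar example: we want a root of f(x) = x^3 + 2x - 5, we take g(x) = x - 1 as the easy problem, and we follow the root of F(x, lam) = lam*f(x) + (1 - lam)*g(x) from lam = 0 to lam = 1 with an ODE solver:

```python
from scipy.integrate import solve_ivp

f  = lambda x: x**3 + 2.0 * x - 5.0    # hard problem: root sought
fp = lambda x: 3.0 * x**2 + 2.0
g  = lambda x: x - 1.0                 # easy problem: root at x = 1
gp = lambda x: 1.0

def dxdlam(lam, x):
    # implicit differentiation of lam*f(x) + (1 - lam)*g(x) = 0
    return -(f(x[0]) - g(x[0])) / (lam * fp(x[0]) + (1.0 - lam) * gp(x[0]))

sol = solve_ivp(dxdlam, (0.0, 1.0), [1.0], rtol=1e-8)
x_root = sol.y[0, -1]
print(x_root)                # about 1.328, where f(x_root) is nearly 0
```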

What happens if there are multiple solutions? For problems with lots of variables, this would be a good approach if you can identify the easy problem. Matlab post. In yesterday's post we looked at a way to solve nonlinear equations that takes away some of the burden of initial guess generation. Today we look at a simpler example and explain a little more about what is going on.

We will use the method of continuity to solve this equation to illustrate a few ideas. The total derivative gives us an ordinary differential equation in the continuation parameter. What about the other solution? Now we have the other solution. You could choose other values to add.

This method does not solve all problems associated with nonlinear root solving, namely, how many roots are there, and which one is "best" or physically reasonable? But it does give a way to solve an equation where you have no idea what an initial guess should be. You can see, however, that just like you can get different answers from different initial guesses, here you can get different answers by setting up the equations differently.

Matlab post. The goal here is to determine how many roots there are in a nonlinear function we are interested in solving. For this example, we use a cubic polynomial because we know there are three roots. Now we consider several approaches to counting the number of roots in this interval.

Visually it is pretty easy: you just look for where the function crosses zero. Computationally, it is trickier. Count the number of times the sign changes in the interval. What we have to do is multiply neighboring elements together, and look for negative values. That indicates a sign change. For example, the product of two positive or two negative numbers is a positive number. You only get a negative number from the product of a positive and a negative number, which means the sign changed.
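A sketch of the neighbor-product test; the cubic here is an assumed example with known roots at -2, 1 and 3:

```python
import numpy as np

x = np.linspace(-5, 5, 1000)
y = (x + 2.0) * (x - 1.0) * (x - 3.0)   # cubic with roots -2, 1, 3

# a negative product of neighboring values means the sign changed between them
sign_changes = np.sum(y[:-1] * y[1:] < 0)
print(sign_changes)    # 3
```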

Using events in an ODE solver, python can identify events in the solution to an ODE, for example, when a function has a certain value, e.g. zero. We can take advantage of this to find the roots and number of roots in this case. We take the derivative of our function, integrate it from an initial starting point, and define an event function that counts zeros. We examine an approach to finding these roots.

This function is pretty well behaved, so if you make a good guess about the solution you will get an answer, but if you make a bad guess, you may get the wrong root. We examine next a way to do it without guessing the solution. All we have to do now is set up the problem and run it.

You can work this out once, and then you have all the roots in the interval and you can select the one you want. To solve this we need to setup a function that is equal to zero at the solution. We have two equations, so our function must return two values. There are two variables, so the argument to our function will be an array of values.
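A sketch of the two-equation setup with fsolve; the equations here are assumed for illustration:

```python
import numpy as np
from scipy.optimize import fsolve

def objective(vars):
    x, y = vars
    # both components must be zero at the solution
    return [x + y - 3.0,
            x**2 - y - 1.0]

guess = [1.0, 1.0]                        # one array of values, one per variable
sol = fsolve(objective, guess)
print(sol)
print(np.allclose(objective(sol), 0.0))   # True
```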

Interesting, we have to specify the divisor in numpy. The default for this in Matlab is 1; the default for this function is 0. You subtract 1 because one degree of freedom is lost from calculating the average. This is useful for computing confidence intervals using the Student's t-tables. Class A had 30 students who received an average test score of 78; Class B had 25 students with an average test score of 85. A standard deviation was also estimated for each class. We want to know if the difference in these averages is statistically relevant.
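The divisor (ddof) behavior can be sketched with an assumed data set:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

print(np.std(data))           # 2.0: divisor N (the numpy default, ddof=0)
print(np.std(data, ddof=1))   # divisor N - 1, as Matlab's std uses
```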

Note that we only have estimates of the true average and standard deviation for each class, and there is uncertainty in those estimates. As a result, we are unsure if the averages are really different. It could have just been luck that a few students in class B did better. Here we simply subtract one from each sample size to account for the estimation of the average of each sample. The difference between two averages determined from small sample numbers follows the t-distribution.

A way to approach determining if the difference is significant or not is to ask, does our computed average fall within a confidence range of the hypothesized value zero? If it does, then we can attribute the difference to statistical variations at that confidence level. If it does not, we can say that statistical variations do not account for the difference at that confidence level, and hence the averages must be different.

Let us consider a smaller confidence interval. An alternative way to get the confidence that the averages are different is to directly compute it from the cumulative t-distribution function. We compute the difference between all the t-values less than tscore and the t-values less than -tscore, which is the fraction of measurements that are between them. In this example, we show some ways to choose which of several models fit data the best.
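The cumulative-distribution computation described above can be sketched with scipy.stats; the t-score and degrees of freedom here are assumed for illustration:

```python
from scipy.stats import t

tscore, dof = 2.3, 20   # assumed values for illustration

# fraction of the distribution lying between -tscore and +tscore
confidence = t.cdf(tscore, dof) - t.cdf(-tscore, dof)
print(confidence)       # roughly 0.97 at this t-score
```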

We have data for the total pressure and temperature of a fixed amount of a gas in a tank that was measured over the course of several days. We want to select a model that relates the pressure to the gas temperature. We need to read the data in, and perform a regression analysis on P vs. T. In python we start counting at 0, so we actually want columns 3 and 4. We will use linear algebra to compute the line coefficients.

Hence, a value close to one means nearly all the variations are described by the model, except for random variations. There are a few ways to examine this. We want to make sure that there are no systematic trends in the errors between the fit and the data, and we want to make sure there are not hidden correlations with other variables. The residuals are the error between the fit and the data.
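A sketch of the linear-algebra fit and the R^2 computation; the T and P values below are hypothetical stand-ins for the tank data:

```python
import numpy as np

# hypothetical T (K) and P data, standing in for the tank measurements
T = np.array([273.0, 293.0, 313.0, 333.0, 353.0])
P = np.array([100.0, 107.5, 114.9, 122.4, 129.8])

A = np.column_stack([T, np.ones_like(T)])   # model P = m*T + b
coeffs, *_ = np.linalg.lstsq(A, P, rcond=None)
Pfit = A @ coeffs

ss_res = np.sum((P - Pfit)**2)              # residual variation
ss_tot = np.sum((P - P.mean())**2)          # total variation
print(1.0 - ss_res / ss_tot)                # R^2, close to 1 for a good fit
```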

The residuals should not show any patterns when plotted against any variables, and they do not in this case. There may be some correlations in the residuals with the run order, which could indicate an experimental source of error. We assume all the errors are uncorrelated with each other. We can use a lag plot to assess this, where we plot residual[i] vs residual[i-1], i.e. each residual against the one before it.

This plot should look random, with no correlations if the model is good. That is a good indication this additional parameter is not significant. This is an example of overfitting the data. Since the constant in this model is apparently not significant, let us consider the simplest model with a fixed intercept of zero.

Let us examine the residuals again. You can see a slight trend of decreasing value of the residuals as the Temperature increases. This may indicate a deficiency in the model with no intercept. Since the molar density of a gas is pretty small, the intercept may be close to, but not equal to zero. That is why the fit still looks ok, but is not as good as letting the intercept be a fitting parameter. That is an example of the deficiency in our model.

Propagation of errors is essential to understanding how the uncertainty in a parameter affects computations that use that parameter. The uncertainty propagates by a set of rules into your solution. These rules are not easy to remember, or apply to complicated situations, and are only approximate for equations that are nonlinear in the parameters. We will use a Monte Carlo simulation to illustrate error propagation. The idea is to generate a distribution of possible parameter values, and to evaluate your equation for each parameter value.

Then, we perform statistical analysis on the results to determine the standard error of the results. We will assume all parameters are defined by a normal distribution with known mean and standard deviation.
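A minimal Monte Carlo sketch with assumed parameters (a = 5.0 +/- 0.1 and b = 2.0 +/- 0.05, both normal) and an assumed equation a/b:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100000

# assumed parameters: normal distributions with known mean and std
a = rng.normal(5.0, 0.1, N)
b = rng.normal(2.0, 0.05, N)

result = a / b                  # evaluate the equation for every sample
print(result.mean())            # close to 2.5
print(result.std())             # the propagated standard error, about 0.08
```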

You can numerically perform error propagation analysis if you know the underlying distribution of errors on the parameters in your equations. One benefit of the numerical propagation is you do not have to remember the error propagation rules, and you directly look at the distribution in nonlinear cases. This approach does have some limitations. In the previous section we examined an analytical approach to error propagation, and a simulation based approach.

You have to install this package, e.g. with pip. After that, the module provides new classes of numbers and functions that incorporate uncertainty and propagate the uncertainty through the functions. In the examples that follow, we repeat the calculations from the previous section using the uncertainties module. Note in the last example, we had to import a function from uncertainties.umath rather than the standard math module. This may be a limitation of the uncertainties package, as not all functions in arbitrary modules can be covered.

Note, however, that you can wrap a function to make it handle uncertainty like this. A real example? This is what I would set up for a real working example. We try to compute the exit concentration from a CSTR. The idea is to wrap the "external" fsolve function using the uncertainties package. Unfortunately, it does not work, and it is not clear why. But see the following discussion for a fix. I got a note from the author of the uncertainties package explaining the cryptic error above, and a solution for it.

The error arises because fsolve does not know how to deal with uncertainties. The idea is to create a function that returns a float, when everything is given as a float. Then, we wrap the fsolve call, and finally wrap the wrapped fsolve call!

It would take some practice to get used to this, but the payoff is that you have an "automatic" error propagation method. Being ever the skeptic, let us compare the result above to the Monte Carlo approach to error estimation below. The uncertainties module is pretty amazing. It automatically propagates errors through a pretty broad range of computations.

It is a little tricky for third-party packages, but it seems doable. Random numbers are used in a variety of simulation methods, most notably Monte Carlo simulations. In another later example, we will see how we can use random numbers for error propagation analysis. First, we discuss two types of pseudorandom numbers we can use in python: uniformly distributed and normally distributed numbers. Let us ask Python to roll the random number generator for us.
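A minimal sketch of generating the two kinds of pseudorandom numbers, plus random integers (the seed and ranges are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

u = rng.random(5)             # uniformly distributed on [0, 1)
n = rng.normal(0.0, 1.0, 5)   # normally distributed, mean 0, std 1
i = rng.integers(1, 7, 5)     # random integers 1..6, like rolling a die

print(u)
print(n)
print(i)
```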

The odds of you winning the last bet are slightly stacked in your favor. Let's play the game many times and see how many times you win and how many times your friend wins. First, let's generate a bunch of numbers and look at the distribution with a histogram. It is also possible to get random integers. Here are a few examples of getting a random integer in a given range. You might do this to get random indices of a list, for example.
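A short sketch of random integers using the modern numpy Generator API (the original may have used the older numpy.random functions); here we pick random indices into a hypothetical list:

```python
import numpy as np

rng = np.random.default_rng(0)

# e.g. pick random indices into a list
data = ['a', 'b', 'c', 'd', 'e']
idx = rng.integers(0, len(data), size=10)   # the high value is exclusive
picks = [data[i] for i in idx]
print(picks)
```

Note the exclusive upper bound, which pairs naturally with 0-based indexing.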

Let us compare the sampled distribution to the analytical distribution. We generate a large set of samples and calculate the probability of getting each value using the hist function from matplotlib.pyplot. In other words, we form a vector where both inequalities are true. Finally, we can sum the vector to get the number of elements where the two inequalities are true, and normalize by the total number of samples to get the fraction of samples that are greater than -sigma and less than sigma.
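The elementwise-logic step can be sketched like this: sample a standard normal, AND the two inequalities together, and normalize. The analytic value for the ±1 sigma fraction is erf(1/sqrt(2)) ≈ 0.6827.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 100_000)

inside = (s > -1.0) & (s < 1.0)   # elementwise AND of the two inequalities
frac = inside.sum() / len(s)      # boolean True counts as 1 in the sum
print(frac)                       # analytic value ~ 0.6827
```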

We only considered the uniform and normal distributions from numpy.random here. There are many distributions of random numbers to choose from, and there are also random numbers in the python random module. Remember these are only pseudorandom numbers, but they are still useful for many applications. The idea here is to formulate a set of linear equations that is easy to solve.
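The linear-equations idea can be sketched for a straight-line fit: each data point contributes one equation m*x_i + b = y_i, and we solve the overdetermined system in the least squares sense. The data below are hypothetical, roughly on the line y = 2x + 1.

```python
import numpy as np

# hypothetical data roughly on the line y = 2 x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.9, 9.0])

# each data point gives one linear equation: m*x_i + b = y_i
A = np.column_stack([x, np.ones_like(x)])
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, b)
```

Adding more columns to A (x**2, x**3, ...) extends this to any polynomial or other model that is linear in its parameters.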

This method can be readily extended to fitting any polynomial model, or other linear model that is fit in a least squares sense. This method does not, however, provide confidence intervals. Matlab post. Fit a fourth order polynomial to this data and determine the confidence interval for each parameter. We want to solve for the p vector and estimate the confidence intervals.

That function just uses the code in the next example, also seen here. All of the parameters appear to be significant, i.e. the confidence intervals do not include zero. This does not mean this is the best model for the data, just that the model fits well. Here is a typical nonlinear function fit to data. In this example we fit the Birch-Murnaghan equation of state to energy vs. volume data. Here is an example of fitting a nonlinear function to data by direct minimization of the summed squared error.

We use that as our initial guess. Since we know the answer is bounded, we use a bounded minimizer from scipy.optimize. We can do nonlinear fitting by directly minimizing the summed squared error between a model and data. This method lacks some of the features of other methods, notably the simple ability to get the confidence interval.
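A minimal sketch of direct minimization of the summed squared error, using hypothetical noise-free data generated from y = a*x/(b + x) with a = 2, b = 1:

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical data generated from y = a*x/(b + x) with a=2, b=1 (no noise)
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = 2.0 * x / (1.0 + x)

def sse(p):
    # summed squared error between model and data
    a, b = p
    return np.sum((y - a * x / (b + x))**2)

sol = minimize(sse, [1.0, 2.0])   # initial guess for (a, b)
a, b = sol.x
print(a, b)                        # should recover a ~ 2, b ~ 1
```

Any objective function works here, which is the flexibility mentioned above; the price is that you do not automatically get a covariance matrix for confidence intervals.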

However, this method is flexible and may offer more insight into how the solution depends on the parameters. We often need to estimate parameters from nonlinear regression of data. We should also consider how good the parameters are, and one way to do that is to consider the confidence interval.

A confidence interval tells us a range that we are confident the true parameter lies in. In this example we use a nonlinear curve-fitting function, scipy.optimize.curve_fit, which also returns the covariance matrix of the parameters; the square roots of its diagonal give the standard errors. Finally, we modify the standard error by a student-t value which accounts for the additional uncertainty in our estimates due to the small number of data points we are fitting to. You can see by inspection that the fit looks pretty reasonable. The parameter confidence intervals are not too big, so we can be pretty confident of their values.
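The covariance-plus-student-t recipe can be sketched as follows, here on hypothetical roughly linear data (a linear model keeps the arithmetic checkable, but the same lines work for any model curve_fit accepts):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t

# hypothetical data, roughly y = 0.5 x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.05, 0.62, 1.10, 1.58, 2.15, 2.47])

def model(x, m, b):
    return m * x + b

popt, pcov = curve_fit(model, x, y)
perr = np.sqrt(np.diag(pcov))            # standard errors from the covariance matrix

alpha = 0.05                             # 95% confidence
dof = len(x) - len(popt)                 # degrees of freedom
tval = t.ppf(1.0 - alpha / 2.0, dof)     # student-t multiplier for small samples

for p, se in zip(popt, perr):
    print(f'{p:.3f} +/- {tval * se:.3f}')
```

The t multiplier matters most for small data sets; as dof grows it approaches the familiar 1.96 for 95% confidence.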

This actually could be a linear regression problem, but it is convenient to illustrate the use of the nonlinear fitting routine because it makes it easy to get confidence intervals for comparison. The basic idea is to use the covariance matrix returned from the nonlinear fitting routine to estimate the student-t corrected confidence interval. This model has two independent variables and two parameters.

We want to do a nonlinear fit to find a and b that minimize the summed squared errors between the model predictions and the data. With only two parameters, we can graph how the summed squared error varies with them, which may help us get initial guesses. Let us assume the parameters lie in a range; here we choose 0 to 5. In other problems you would adjust this as needed. It can be difficult to figure out initial guesses for nonlinear fitting problems. For one and two dimensional systems, graphical techniques may be useful to visualize how the summed squared error between the model and data depends on the parameters.
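A sketch of the grid-evaluation idea (minus the plotting), for hypothetical data generated from y = a*exp(-b*x): evaluate the summed squared error on a grid of candidate parameters, and use the grid minimum as the initial guess for the real fit.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(-0.5 * x)          # hypothetical data from y = a*exp(-b*x)

# evaluate the summed squared error on a grid of candidate parameters
a_range = np.linspace(0, 5, 51)
b_range = np.linspace(0, 5, 51)
A, B = np.meshgrid(a_range, b_range)
SSE = np.zeros_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        SSE[i, j] = np.sum((y - A[i, j] * np.exp(-B[i, j] * x))**2)

# the grid minimum is a reasonable initial guess for a real fit
i, j = np.unravel_index(SSE.argmin(), SSE.shape)
print(A[i, j], B[i, j])
```

In practice you would draw a contour plot of SSE over (A, B); the printed grid minimum is the same information in non-graphical form.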

Here is an example of doing that. Matlab can read these files in easily. Suppose we have a file containing this data: we often have some data that we have obtained in the lab, and we want to solve some problem using it. For example, suppose we have this data that describes the value of f at time t. The linearly interpolated estimate is not too accurate. For nonlinear functions, cubic interpolation may improve the accuracy, as it implicitly includes information about the curvature by fitting a cubic polynomial over neighboring points.

Interestingly, this is a different value than Matlab's cubic interpolation. Let us show the cubic spline fit. That is a weird looking fit. Very different from what Matlab produces. This is a good teaching moment not to rely blindly on interpolation! We will rely on the linear interpolation from here out which behaves predictably. It is easy to interpolate a new value of f given a value of t.

We can approach this a few ways. We set up a function that we can use fsolve on; the function will be equal to zero at the desired time. Since we use interpolation here, we will get an approximate answer. We can also switch the order of the interpolation to solve this problem. An issue we have to address in this method is that the "x" values must be monotonically increasing. It is somewhat subtle to reverse a list in python; I will use the cryptic slice syntax [::-1] instead of the list.reverse() method.
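The reversed-interpolation trick can be sketched on hypothetical data f = exp(-t), which is monotonically decreasing; reversing both arrays with [::-1] gives the increasing x-values np.interp requires.

```python
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = np.exp(-t)                      # monotonically decreasing data

# np.interp requires increasing x-values, so reverse both arrays with [::-1]
t_at_half = np.interp(0.5, f[::-1], t[::-1])
print(t_at_half)
# linear interpolation gives ~0.79 here; the exact answer is ln 2 ~ 0.69,
# illustrating the accuracy limits of interpolating sparse nonlinear data
```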

That is not what I want. Let us look at both ways and decide which is best. Here is what is happening: this is an example of where you clearly need more data in that range to make good estimates. Neither interpolation method is doing a great job. The trouble in reality is that you often do not know the real function to do this analysis.

Here you can only say approximately what the time is. If you need a more precise answer, you need better data, or you need to use an approach other than interpolation. For example, you could fit an exponential function to the data and use that to estimate values at other times. So which is the best way to interpolate?

When you use an interpolated function in a nonlinear function, strange, unintuitive things can happen. That is why the blue curve looks odd. Between data points the original interpolation is made of linear segments, but when you invert them inside a nonlinear function, curvature appears. When we have data at two points but we need data in between them we use interpolation.

The syntax in python is slightly different than in Matlab. The default interpolation method is simple linear interpolation between points. Other methods exist too, such as fitting a cubic spline to the data and using the spline representation to interpolate. In this case the cubic spline interpolation is more accurate than the linear interpolation. That is because the underlying data is polynomial in nature, and a spline is like a polynomial.
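A minimal sketch of the two methods side by side, using data from y = x**2 so the true value in between points is known. The cubic spline reproduces the polynomial, while the linear interpolant overshoots.

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x**2                                  # polynomial data

f_lin = interp1d(x, y)                    # default: linear
f_cub = interp1d(x, y, kind='cubic')      # cubic spline

print(f_lin(2.5), f_cub(2.5))             # true value is 6.25; linear gives 6.5
```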

That may not always be the case, and you need some engineering judgement to know which method is best.

Figure: Illustration of a spline fit to data and finding the maximum point.

The function we seek to maximize is an unbounded plane, while the constraint is a unit circle.

We could set up a Lagrange multiplier approach to solving this problem, but we will use a constrained optimization approach instead. A photovoltaic device is characterized by a current-voltage relationship. Let us say, for argument's sake, that the relationship is known and defined by: The voltage is highest when the current is equal to zero, but of course then you get no power. The current is highest when the voltage is zero, i.e. at short-circuit conditions, but then the power is also zero.

This is a constrained optimization. We could solve this problem analytically by taking the appropriate derivative and solving it for zero. That still might require solving a nonlinear problem though. We will directly set up and solve the constrained optimization.
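The current-voltage relationship itself is elided above, so here is a sketch of the same pattern on a hypothetical stand-in problem: maximize P = x*y subject to the equality constraint x + y = 4. Minimizing the negative of the objective turns the maximization into the form scipy expects.

```python
from scipy.optimize import minimize

# hypothetical illustration: maximize P = x * y subject to x + y = 4
def neg_power(p):
    x, y = p
    return -(x * y)                      # minimize the negative to maximize

cons = {'type': 'eq', 'fun': lambda p: p[0] + p[1] - 4.0}
sol = minimize(neg_power, [1.0, 1.0], constraints=[cons])
print(sol.x, -sol.fun)                   # optimum at x = y = 2, P = 4
```

With an equality constraint present, minimize selects the SLSQP method, which handles this kind of problem.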

You can see the approximate maximum power in the output. We want the maximum value of the circle, on the plane. We plot these two functions here. Rather than perform the analytical differentiation, here we develop a way to numerically approximate the partial derivatives. The function we defined above, dfunc, will equal zero at a maximum or minimum. It turns out there are two solutions to this problem, but only one of them is the maximum value. Which solution you get depends on the initial guess provided to the solver.
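The numerical partial derivatives can be sketched with a central difference; the helper below is a hypothetical stand-in for the differencing used inside dfunc, checked against a function whose derivatives are known.

```python
def partial(f, i, x, h=1e-6):
    """Central-difference approximation to df/dx_i at the point x."""
    xp = list(x); xm = list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

f = lambda x, y: x**2 * y
print(partial(f, 0, (2.0, 3.0)))   # exact: 2*x*y = 12
print(partial(f, 1, (2.0, 3.0)))   # exact: x**2 = 4
```

The central difference has O(h**2) truncation error, so h around 1e-6 balances truncation against floating-point roundoff.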

Here we have to use some judgement to identify the maximum. Three-dimensional plots in matplotlib are a little more difficult than in Matlab, where the code is almost the same as for 2D plots, just with different commands, e.g. plot3 instead of plot. In Matplotlib you have to import additional modules in the right order, and use the object-oriented approach to plotting as shown here. To produce these crops, it costs the farmer for seed, water, fertilizer, etc. The farmer has storage space for a fixed number of bushels.

Each rood yields a known average number of bushels of wheat, or 30 bushels of corn. There are some constraint inequalities, specified by the limits on expenses, storage and roodage. To solve this problem, we cast it as a linear programming problem, which minimizes a function f(X) subject to some constraints.

We create a proxy function for the negative of profit, which we seek to minimize. This code is not exactly the same as the original post, but we get to the same answer. The linear programming capability in scipy was somewhat limited in older releases and is better in more recent ones. There are also some external libraries available. In the code above, we demonstrate that the point we find on the curve that minimizes the distance satisfies the property that a vector from that point to our other point is normal to the tangent of the curve at that point.
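The cast-to-linear-programming step can be sketched with scipy.optimize.linprog. The actual crop numbers are elided above, so this uses made-up coefficients: maximize profit 3*x1 + 2*x2 subject to x1 + x2 <= 4 (land) and x1 <= 2 (storage), with both plantings non-negative.

```python
from scipy.optimize import linprog

# hypothetical planting problem: maximize profit 3*x1 + 2*x2
c = [-3.0, -2.0]                 # linprog minimizes, so negate the profit
A_ub = [[1.0, 1.0],              # x1 + x2 <= 4  (land)
        [1.0, 0.0]]              # x1      <= 2  (storage)
b_ub = [4.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # optimum x1 = 2, x2 = 2, profit = 10
```

Negating c is the standard proxy-function trick: the solver only minimizes, so you minimize the negative of what you want to maximize.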

This is shown by the fact that the dot product of the two vectors is very close to zero. It is not exactly zero because the accuracy criterion used to stop the minimization is not tight enough. The key to successfully solving many differential equations is correctly classifying the equations, putting them into a standard form and then picking the appropriate solver. You must be able to determine what type of equation you have. Now, suppose you want to know at what time the solution is equal to 3. A simple approach is to use reverse interpolation.

We simply reverse the x and y vectors so that y is the independent variable, and we interpolate the corresponding x-value. It is straightforward to plot functions in Cartesian coordinates; it is less convenient in cylindrical coordinates. Here we solve an ODE in cylindrical coordinates, and then convert the solution to Cartesian coordinates for simple plotting. A mixing tank initially contains a known mass of salt mixed into a volume of water.

The wrinkle is that the inlet conditions are not constant. You can see the discontinuity in the salt concentration at 10 minutes due to the discontinuous change in the entering salt concentration. The ode solvers in Matlab allow you to create functions that define events that can stop the integration, detect roots, etc. We will explore how to get a similar effect in python. Here is an example that somewhat does this, but it is only an approximation. We will manually integrate the ODE, adjusting the time step in each iteration to zero in on the solution.

When the desired accuracy is reached, we stop the integration. It does not appear that events are supported in scipy. This particular solution works for this example, probably because it is well behaved. It is "downhill" to the desired solution. It is not obvious this would work for every example, and it is certainly possible the algorithm could go "backward" in time.

A better approach might be to integrate forward until you detect a sign change in your event function, and then refine it in a separate loop. I like the events integration in Matlab better, but this is actually pretty functional. It should not be too hard to use this for root counting. It would be considerably harder to get the actual roots. It might also be hard to get the positions of events that involve the sign or value of the derivatives at the event points. ODE solving in Matlab is considerably more advanced in functionality than in scipy.

There do seem to be some extra packages that provide this, though. The ODE functions in scipy.integrate do not support events directly, but we can achieve something like it by digging into the guts of the solver and writing a little code. In a previous example I used an event to count the number of roots in a function by integrating the derivative of the function.

That was a lot of programming to do something as simple as finding the roots of the function! Below is an example of using a function coded into pycse to solve the same problem. It is a bit more sophisticated because you can define whether an event is terminal, and the direction of the approach to zero for each event. Matlab post. The analytical solution to an ODE is a function, which can be solved to get a particular value, e.g. the time where the solution takes a certain value.

In a numerical solution to an ODE we get a vector of independent variable values and the corresponding function values at those points. To solve for a particular function value we need a different approach.

This post will show one way to do that in python. We will get a solution, then create an interpolating function and use fsolve to get the answer. You can see the solution is near two seconds. Now we create an interpolating function to evaluate the solution.

We will plot the interpolating function on a finer grid to make sure it seems reasonable. That is it. Interpolation can provide a simple way to evaluate the numerical solution of an ODE at other values. For completeness we examine a final way to construct the function. We can actually integrate the ODE in the function to evaluate the solution at the point of interest.
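The interpolate-then-fsolve approach can be sketched end to end for dy/dt = y with y(0) = 1, where the exact crossing time for y = 3 is ln(3):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import fsolve

# dy/dt = y, y(0) = 1, so y(t) = exp(t); when does y = 3?
t = np.linspace(0, 2, 100)
y = odeint(lambda y, t: y, 1.0, t).flatten()

# interpolating function of the numerical solution
interp = lambda tt: np.interp(tt, t, y)

t_ans, = fsolve(lambda tt: interp(tt) - 3.0, 1.0)
print(t_ans)    # exact answer is ln(3) ~ 1.0986
```

The accuracy is limited by both the ODE tolerances and the interpolation between solution points, so a finer time grid tightens the answer.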

If it is not computationally expensive to evaluate the ODE solution this works fine. Note, however, that the ODE will get integrated from 0 to the value t for each iteration of fsolve. We have integrated an ODE over a specific time span.

Sometimes it is desirable to get the solution at specific points, e.g. at evenly spaced times. This example demonstrates how to do that. Matlab post. The deval function uses interpolation to evaluate the solution at other values. An alternative approach would be to stop the ODE integration when the solution has the value you want.
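In scipy, the t_eval argument of solve_ivp gives an effect similar to Matlab's deval: the solver integrates adaptively but reports the solution only at the times you ask for. A minimal sketch for dy/dt = -y:

```python
import numpy as np
from scipy.integrate import solve_ivp

# dy/dt = -y, y(0) = 1; report the solution only at the times we care about
sol = solve_ivp(lambda t, y: -y, [0, 2], [1.0], t_eval=[0.0, 1.0, 2.0])
print(sol.t)      # [0. 1. 2.]
print(sol.y[0])   # close to exp(-t) = [1, 0.3679, 0.1353]
```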

That can be done in Matlab by using an "event" function. You set up an event function and tell the ode solver to use it by setting an option. We use an events function to find minima and maxima, by evaluating the ODE in the event function to find conditions where the first derivative is zero, approached from the right direction. A maximum is where the first derivative is zero and decreasing, and a minimum is where the first derivative is zero and increasing.
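The same events idea is available in scipy's solve_ivp. A sketch on y'' = -y with y(0) = 0, y'(0) = 1, whose solution is sin(t): the event function returns the first derivative, and direction = -1 selects zero crossings where it is decreasing, i.e. maxima.

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' = -y with y(0) = 0, y'(0) = 1 has solution y = sin(t)
def ode(t, Y):
    y, yp = Y
    return [yp, -y]

def maximum(t, Y):
    return Y[1]                 # first derivative; zero at extrema
maximum.direction = -1          # derivative decreasing through zero -> maximum

sol = solve_ivp(ode, [0, 10], [0.0, 1.0], events=maximum)
print(sol.t_events[0])          # maxima of sin(t): pi/2, 5*pi/2, ...
```

Setting direction = +1 instead would locate the minima, and a `terminal = True` attribute would stop the integration at the first event.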

Numerical methods usually work, but sometimes they do not, and it is not always obvious when they have failed! Part of using a tool like python is checking how well your solution really worked. We use an example of integrating an ODE that arises from the van der Waals equation of state for a gas. Now, we solve the ODE. We will specify a large relative tolerance criterion (note the default is much smaller than what we show here). You can see there is disagreement between the analytical solution and numerical solution.

The origin of this problem is accuracy at the initial condition, where the derivative is extremely large. We can tighten the tolerance criteria to get a better answer; the defaults in odeint are actually about 1.49e-8. The problem here was that the derivative value varied by four orders of magnitude over the integration range, so the loose tolerances we specified were insufficient to accurately estimate the numerical derivatives over that range.

Tightening the tolerances helped resolve that problem. Another approach might be to split the integration up into different regions. It is inconvenient to write an ode function for each parameter case, so here we examine a convenient way to solve this problem: we pass the parameter to the ODE at runtime. We consider the following ODE: You have to use some judgement here to decide how long to run the reaction to ensure a target goal is met.

In those methods, we either used an anonymous function to parameterize an ode function, or we used a nested function that used variables from the shared workspace. Here we use a trick to pass a parameter to an ODE through the initial conditions. We expand the ode function definition to include this parameter, and set its derivative to zero, effectively making it a constant.
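The constant-state trick can be sketched for a hypothetical first-order decay dC/dt = -k*C: the parameter k rides along as an extra state variable whose derivative is zero.

```python
import numpy as np
from scipy.integrate import solve_ivp

# dC/dt = -k*C; carry k along as an extra state with zero derivative
def ode(t, Y):
    C, k = Y
    return [-k * C, 0.0]

k = 0.5
sol = solve_ivp(ode, [0, 2], [1.0, k], t_eval=[2.0], rtol=1e-8)
print(sol.y[0, -1])     # close to exp(-0.5*2) ~ 0.3679
```

Changing the parameter now only means changing the initial condition vector, not rewriting the ode function.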

I do not think this is a very elegant way to pass parameters around compared to the previous methods, but it nicely illustrates that there is more than one way to do it. And who knows, maybe it will be useful in some other context one day!

Here we define the ODE function in a loop. Since the nested function is in the namespace of the main function, it can "see" the values of the variables in the main function. We will use this method to look at the solution to the van der Pol equation for several different values of mu.

You can see the solution changes dramatically for different values of mu. The point here is not to understand why, but to show an easy way to study a parameterized ODE with a nested function. Nested functions can be a great way to "share" variables between functions, especially for ODE solving, nonlinear algebra solving, or any other application where you need a lot of parameters defined in one function to be available in another function.
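The nested-function pattern can be sketched like this for the van der Pol equation (the mu values below are arbitrary choices for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

results = {}
for mu in [0.5, 1.0, 2.0]:
    def vdp(t, Y):
        # the nested function "sees" the current mu from the enclosing scope
        y, yp = Y
        return [yp, mu * (1 - y**2) * yp - y]

    # solve within the same loop iteration, so mu has the intended value
    sol = solve_ivp(vdp, [0, 20], [1.0, 0.0], max_step=0.1)
    results[mu] = sol.y[0]

print({mu: y[-1] for mu, y in results.items()})
```

One caution with closures: if you stored the vdp functions and called them after the loop, they would all see the final mu. Solving inside the loop, as here, avoids that late-binding pitfall.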

We consider the Van der Pol oscillator here. Here is the phase portrait; you can see that a limit cycle is approached, indicating periodicity in the solution. The solutions to this equation are the Bessel functions. To solve this equation numerically, we must convert it to a system of first order ODEs. The equation is singular at the origin, but if we start very close to zero instead, we avoid the problem. You can see the numerical and analytical solutions overlap, indicating they are at least visually the same.

Matlab post. An undamped pendulum with no driving force is described by the equation below. This leads to the following system. The phase portrait is a plot of a vector field which qualitatively shows how the solutions to these equations evolve from a given starting point. We will plot the derivatives as a vector at each (y1, y2), which will show us the initial direction from each point. Let us plot a few solutions on the vector field. What do these figures mean?

For starting points near the origin with small velocities, the pendulum oscillates in a stable periodic orbit. For others, the trajectory appears to fly off in y1. The y1 data in this case is not wrapped around to stay in a bounded range. These equations could be solved numerically, but in this case there are analytical solutions that can be derived. The equations we will solve are given below. In this example, we evaluate the solution using linear algebra. I do not know of a solver in scipy at this time that can do this.

There is not a built-in solver for DAE systems in scipy. It looks like pysundials may do it, but it must be compiled and installed. I am also unaware of dedicated BVP solvers in scipy. In the following examples we implement some approaches to solving certain types of linear BVPs. One approach to solving BVPs is to use the shooting method. The reason we cannot use an initial value solver directly for a BVP is that there is not enough information at the initial point to start the integration.

In the shooting method, we take the function value at the initial point, and guess what the function derivatives are so that we can do an integration. If our guess was good, then the solution will go through the known second boundary point. If not, we guess again, until we get the answer we need. In this example we repeat the pressure driven flow example, but illustrate the shooting method. Now we have clearly overshot.
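A sketch of the shooting method on a scaled model problem, u'' = -1 with u(0) = 0 and u(1) = 0 (the same structure as the pressure-driven flow example, with the physical constants scaled away): guess the initial slope, integrate, and let fsolve drive the miss at the far boundary to zero.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# u'' = -1 with u(0) = 0 and u(1) = 0
def ode(t, Y):
    u, up = Y
    return [up, -1.0]

def boundary_residual(guess):
    # integrate with a guessed initial slope; return the miss at x = 1
    g = np.ravel(guess)[0]
    sol = solve_ivp(ode, [0, 1], [0.0, g], rtol=1e-8)
    return sol.y[0, -1]        # we want u(1) = 0

slope, = fsolve(boundary_residual, 1.0)
print(slope)                   # exact solution u = x(1-x)/2, so u'(0) = 0.5
```

Because the residual here is linear in the guessed slope, fsolve converges immediately; nonlinear ODEs may need a decent starting guess.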

Let us now make a function that will iterate for us to find the right value. Here is an example usage. For this example, we solve the plane Poiseuille flow problem using a finite difference approach. An advantage of the approach we use here is that we do not have to rewrite the second order ODE as a set of coupled first order ODEs, nor do we have to provide guesses for the solution.
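A minimal finite-difference sketch for the scaled model problem u'' = -1 with u(0) = u(1) = 0 (the same structure as the Poiseuille problem, with constants scaled away): each interior grid point contributes one linear equation via the central difference, and the boundary conditions replace the first and last rows.

```python
import numpy as np

# solve u'' = -1, u(0) = u(1) = 0 on a grid with central differences
N = 51
x = np.linspace(0, 1, N)
h = x[1] - x[0]

A = np.zeros((N, N))
b = np.full(N, -1.0)
for i in range(1, N - 1):
    # (u[i-1] - 2 u[i] + u[i+1]) / h**2 = -1
    A[i, i - 1] = 1.0
    A[i, i] = -2.0
    A[i, i + 1] = 1.0
b *= h**2

# boundary conditions replace the first and last equations
A[0, 0] = 1.0;  b[0] = 0.0
A[-1, -1] = 1.0; b[-1] = 0.0

u = np.linalg.solve(A, b)
print(u[N // 2])               # exact u(0.5) = 0.125 for u = x(1-x)/2
```

No initial guesses and no first-order reformulation are needed, which is exactly the advantage described above; the cost is assembling and solving one linear system.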


Although HECS was developed mainly for digital image processing of hexagonally sampled images, its benefits extend to other applications such as finding the shortest path distance and shortest path routing between points in hexagonal interconnection networks.

In particular, (x, y, 1) is such a system of homogeneous coordinates for the point (x, y).

For example, the Cartesian point (1, 2) can be represented in homogeneous coordinates as (1, 2, 1) or (2, 4, 2). The original Cartesian coordinates are recovered by dividing the first two positions by the third. Thus unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates.

As any line of the Euclidean plane is parallel to a line passing through the origin, and since parallel lines have the same point at infinity, the infinite point on every line of the Euclidean plane has been given homogeneous coordinates. Note that the triple (0, 0, 0) is omitted and does not represent any point. The origin is represented by (0, 0, 1). Some authors use different notations for homogeneous coordinates which help distinguish them from Cartesian coordinates.

The use of colons instead of commas, for example (x : y : z) instead of (x, y, z), emphasizes that the coordinates are to be considered ratios. The discussion in the preceding section applies analogously to projective spaces other than the plane. So the points on the projective line may be represented by pairs of coordinates (x, y), not both zero. In this case, the point at infinity is (1, 0). The use of real numbers gives homogeneous coordinates of points in the classical case of the real projective spaces; however, any field may be used, and in particular the complex numbers may be used for complex projective space.

For example, the complex projective line uses two homogeneous complex coordinates and is known as the Riemann sphere. Other fields, including finite fields, can be used. Homogeneous coordinates for projective spaces can also be created with elements from a division ring (skewfield).

However, in this case, care must be taken to account for the fact that multiplication may not be commutative. For the general ring A , a projective line over A can be defined with homogeneous factors acting on the left and the projective linear group acting on the right. Another definition of the real projective plane can be given in terms of equivalence classes.

If (x, y, z) is one of the elements of the equivalence class p, then these are taken to be homogeneous coordinates of p. The equivalence classes p are the lines through the origin with the origin removed. The origin does not really play an essential part in the previous discussion, so it can be added back in without changing the properties of the projective plane.

This produces a variation on the definition, namely that the projective plane is defined as the set of lines in R^3 that pass through the origin, and the coordinates of a non-zero element (x, y, z) of a line are taken to be homogeneous coordinates of the line. These lines are now interpreted as points in the projective plane. Again, this discussion applies analogously to other dimensions. Homogeneous coordinates are not uniquely determined by a point, so a function defined on the coordinates, say f(x, y, z), does not determine a function defined on points, as with Cartesian coordinates.

Specifically, suppose f satisfies f(kx, ky, kz) = k^n f(x, y, z) for some degree n; then the zero set of f is well defined on points. Each triple (s, t, u) determines a line; the line determined is unchanged if the triple is multiplied by a non-zero scalar, and at least one of s, t and u must be non-zero. So the triple (s, t, u) may be taken to be homogeneous coordinates of a line in the projective plane, that is, line coordinates as opposed to point coordinates.

Geometrically it represents the set of lines that pass through the point (x, y, z) and may be interpreted as the equation of the point in line coordinates. In the same way, planes in 3-space may be given sets of four homogeneous coordinates, and so on for higher dimensions. In general, there is no difference either algebraically or logically between homogeneous coordinates of points and lines. So plane geometry with points as the fundamental elements and plane geometry with lines as the fundamental elements are equivalent except for interpretation.

This leads to the concept of duality in projective geometry, the principle that the roles of points and lines can be interchanged in a theorem in projective geometry and the result will also be a theorem. Analogously, the theory of points in projective 3-space is dual to the theory of planes in projective 3-space, and so on for higher dimensions. Assigning coordinates to lines in projective 3-space is more complicated, since it would seem that a total of 8 coordinates are required: either the coordinates of two points which lie on the line, or of two planes whose intersection is the line.

Homogeneous coordinates are used to locate the point of intersection in this case. The first solution is the point (0, 0) in Cartesian coordinates, the finite point of intersection. The second solution gives the homogeneous coordinates (0, 1, 0), which corresponds to the direction of the y-axis. Therefore, (0, 1, 0) is the point of intersection, counted with multiplicity 2 in agreement with the theorem.

These points are called the circular points at infinity and can be regarded as the common points of intersection of all circles. This can be generalized to curves of higher order as circular algebraic curves. Just as the selection of axes in the Cartesian coordinate system is somewhat arbitrary, the selection of a single system of homogeneous coordinates out of all possible systems is somewhat arbitrary.

Therefore, it is useful to know how the different systems are related to each other. Let (x, y, z) be homogeneous coordinates of a point in the projective plane. A fixed nonsingular matrix A defines a new set of coordinates (X, Y, Z) by matrix multiplication. Multiplication of (x, y, z) by a scalar results in the multiplication of (X, Y, Z) by the same scalar, and X, Y and Z cannot all be 0 unless x, y and z are all zero, since A is nonsingular. So (X, Y, Z) are a new system of homogeneous coordinates for the same point of the projective plane.

In the related system of barycentric coordinates, points within the triangle are represented by positive masses, and points outside the triangle are represented by allowing negative masses.
