
Applying the Haar Wavelet Transform to Time Series Information

This was the first web page I wrote on wavelets. From this seed grew other web pages which discuss a variety of wavelet related topics; for a 'table of contents' see the links to the sub-pages. This web page applies the wavelet transform to a time series composed of stock market close prices. Later web pages expand on this work in a variety of areas (e.g., compression, spectral analysis and forecasting). When I started out I thought that I would implement the Haar wavelet and that some of my colleagues might find it useful.

I did not expect signal processing to be such an interesting topic. Nor did I understand how many different areas of computer science, mathematics, and quantitative finance would be touched by wavelets. I kept finding that 'one thing led to another', making it difficult to find a logical stopping place. This wandering path of discovery on my part also accounts for the somewhat organic growth of these web pages. I have tried to tame this growth and organize it, but I fear that it still reflects the fact that I did not know where I was going when I started.

The Java code published along with this web page reflects the first work I did on wavelets.

More sophisticated, lifting scheme based algorithms, implemented in Java, can be found on other web pages. The wavelet lifting scheme code, published on other web pages, is simpler and easier to understand. The wavelet lifting scheme also provides an elegant and powerful framework for implementing a range of wavelet algorithms.

In implementing wavelet packet algorithms, I switched from Java to C++. The wavelet packet algorithm I used is simpler and more elegant using C++'s operator overloading features. C++ also supports generic data structures (templates), which allowed me to implement a generic class hierarchy for wavelets. This code includes several different wavelet algorithms, including Haar, linear interpolation and Daubechies D4.

Like the wavelet algorithms, the financial modeling done here represents very early work.

When I started working on these web pages I had no experience with modeling financial time series. The work described on this web page led to more intensive experiments with wavelet filters in financial models, which I continue to work on. On this web page I use stock market close prices. In financial modeling one usually uses returns, since what you are trying to predict is future return.

I became interested in wavelets by accident.

I was working on software involved with financial time series (e.g., equity open and close price), so I suppose that it was an accident waiting to happen. I was reading the February 2001 issue of WIRED magazine when I saw the graph included below. Every month WIRED runs various graphic visualizations of financial data and this was one of them.

If stock prices do indeed factor in all knowable information, a composite price graph should proceed in an orderly fashion, as new information nudges perceived value against the pull of established tendencies. Wavelet analysis, widely used in communications to separate signal (patterned motion) from noise (random activity), suggests otherwise.

This image shows the results of running a Haar transform - the fundamental wavelet formula - on the daily close of the Dow and NASDAQ since 1993. The blue mountains constitute signal. The embedded red spikes represent noise, of which the yellow line follows a 50-day moving average.

Noise, which can be regarded as investor ignorance, has risen along with the value of both indices. But while noise in the Dow has grown 500 percent on average, NASDAQ noise has ballooned 3,000 percent, far outstripping NASDAQ's spectacular 500-percent growth during the same period.

Most of this increase has occurred since 1997, with an extraordinary surge since January 2000. Perhaps there was a Y2K glitch after all - one that derailed not operating systems and CPUs, but investor psychology. Clem Chambers (clemc@advfn.com).

Graph and quote from WIRED Magazine, February 2001, page 176.

I am a Platonist. I believe that, in the abstract, there is truth, but that we can never actually reach it. We can only reach an approximation, or a shadow of truth. Modern science expresses this as Heisenberg uncertainty.

A Platonist view of a financial time series is that there is a 'true' time series that is obscured to some extent by noise. For example, a close price or bid/ask time series for a stock moves on the basis of the supply and demand for shares.

In the case of a bid/ask time series, the supply/demand curve will be surrounded by the noise created by random order arrival. If, somehow, the noise could be filtered out, we would see the 'true' supply/demand curve. Software which uses this information might be able to do a better job because it would not be confused by false movements created by noise.

The WIRED graph above suggests that wavelet analysis can be used to filter a financial time series to remove the associated noise. Of course there is a vast area that is not addressed by the WIRED quote. What, for example, constitutes noise? What are wavelets and Haar wavelets? Why are wavelets useful in analyzing financial time series? When I saw this graph I knew answers to none of these questions.

The analysis provided in the brief WIRED paragraph is shallow as well. Noise in the time series increases with trading volume. In order to claim that noise has increased, the noise should be normalized for trading volume.

Reading is a dangerous thing.

It can launch you off into strange directions. I moved from California to Santa Fe, New Mexico because I read a book. That one graph in WIRED magazine launched me down a path that I spent many months following.

Like any adventure, I'm not sure if I would have embarked on this one if I had known how long and, at times, difficult, the journey would be.

Years ago, when it first came out, I bought a copy of the book The World According to Wavelets by Barbara Hubbard, on the basis of a review I read in the magazine Science. The book sat on my shelf unread until I saw the WIRED graph.

Wavelets have been somewhat of a fad, a buzzword that people have thrown around. Barbara Hubbard started writing The World According to Wavelets when the wavelet fad was starting to catch fire. She provides an interesting history of how wavelets developed in the mathematical and engineering worlds. She also makes a valiant attempt to provide an explanation of what the wavelet technique is. Ms. Hubbard is a science writer, not a mathematician, but she mastered a fair amount of basic calculus and signal processing theory (which I admire her for). When she wrote The World According to Wavelets there were few books on wavelets and no introductory material.

Although I admire Barbara Hubbard's heroic effort, I had only a surface understanding of wavelets after reading The World According to Wavelets.

There is a vast literature on wavelets and their applications. From the point of view of a software engineer (with only a year of college calculus), the problem with the wavelet literature is that it has largely been written by mathematicians, either for other mathematicians or for students in mathematics. I'm not a member of either group, so perhaps my problem is that I don't have a fluent grasp of the language of mathematics.

I certainly feel this whenever I read journal articles on wavelets. However, I have tried to concentrate on books and articles that are explicitly introductory and tutorial.


Even these have proven to be difficult. The first chapter of the book Wavelets Made Easy by Yves Nievergelt starts out with an explanation of Haar wavelets (these are the wavelets used to generate the graph published in WIRED). This chapter has numerous examples and I was able to understand and implement Haar wavelets from this material (links to my Java code for Haar wavelets can be found below). A later chapter discusses the Daubechies wavelet transform. Unfortunately, this chapter of Wavelets Made Easy does not seem to be as good as the material on Haar wavelets. There appear to be a number of errors in this chapter and implementing the algorithm described by Nievergelt does not result in a correct wavelet transform.

Among other things, the wavelet coefficients for the Daubechies wavelets seem to be wrong. My web page on the Daubechies wavelet transform can be found elsewhere on this site. The book Ripples in Mathematics (see the references at the end of the web page) is a better reference.

There is a vast literature on wavelets. This includes thousands of journal articles and many books.

The books on wavelets range from relatively introductory works like Nievergelt's Wavelets Made Easy (which is still not light reading) to books that are accessible only to graduate students in mathematics. There is also a great deal of wavelet material on the Web. This includes a number of tutorials (see below).

Given the vast literature on wavelets, there is no need for yet another tutorial. But it might be worthwhile to summarize my view of wavelets as they are applied to 1-D signals or time series (an image is 2-D data).

A time series is simply a sample of a signal or a record of something, like temperature, water level or market data (like equity close price).

Wavelets allow a time series to be viewed in multiple resolutions. Each resolution reflects a different frequency. The wavelet technique takes averages and differences of a signal, breaking the signal down into a spectrum. All the wavelet algorithms that I'm familiar with work on time series whose length is a power of two (e.g., 64, 128, 256...). Each step of the wavelet transform produces two sets of values: a set of averages and a set of differences (the differences are referred to as wavelet coefficients). Each step produces a set of averages and coefficients that is half the size of the input data. For example, if the time series contains 256 elements, the first step will produce 128 averages and 128 coefficients.
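To make this concrete, here is a minimal sketch of a single step in Java. It is my own illustration of the averaging and differencing just described, not the documented Java source published with these web pages; the class and method names are invented for the example.

    public final class HaarStepSketch {
        /** One forward Haar step: first half = averages, second half = coefficients. */
        public static double[] step(double[] signal) {
            int half = signal.length / 2;          // signal.length is assumed to be even
            double[] out = new double[signal.length];
            for (int i = 0; i < half; i++) {
                double even = signal[2 * i];
                double odd  = signal[2 * i + 1];
                out[i]        = (even + odd) / 2.0;   // average (smoothed value)
                out[half + i] = (even - odd) / 2.0;   // difference (wavelet coefficient)
            }
            return out;
        }

        public static void main(String[] args) {
            double[] demo = {5, 3, 8, 4};   // a 256-element series would work the same way
            System.out.println(java.util.Arrays.toString(step(demo)));  // [4.0, 6.0, 1.0, 2.0]
        }
    }

For a 256-element input this step yields the 128 averages and 128 coefficients described above.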

The averages then become the input for the next step (e.g., 128 averages resulting in a new set of 64 averages and 64 coefficients). This continues until one average and one coefficient (e.g., 2^0) is calculated.

The average and difference of the time series is made across a window of values.

Most wavelet algorithms calculate each new average and difference by shifting this window over the input data. For example, if the input time series contains 256 values, the window will be shifted by two elements, 128 times, in calculating the averages and differences. The next step of the calculation uses the previous set of averages, also shifting the window by two elements. This has the effect of averaging across a four element window.

Logically, the window increases by a factor of two each time. In the wavelet literature this tree-structured recursive algorithm is referred to as a pyramidal algorithm.

The power-of-two coefficient (difference) spectra generated by a wavelet calculation reflect change in the time series at various resolutions. The first coefficient band generated reflects the highest frequency changes. Each later band reflects changes at lower and lower frequencies.

There are an infinite number of wavelet basis functions. The more complex functions (like the Daubechies wavelets) produce overlapping averages and differences that provide a better average than the Haar wavelet at lower resolutions. However, these algorithms are more complicated.

Every field of specialty develops its own sub-language. This is certainly true of wavelets. I've listed a few definitions here which, if I had understood their meaning, would have helped me in my wanderings through the wavelet literature.

Wavelet
A function that results in a set of high frequency differences, or wavelet coefficients. In other terms, the wavelet calculates the difference between a prediction and an actual value. If we have a data sample s_i, s_{i+1}, s_{i+2}, ..., the Haar wavelet equation is

    c_i = (s_i - s_{i+1}) / 2

where c_i is the wavelet coefficient. Some references use a slightly different expression for the Haar wavelet.

Scaling Function
The scaling function produces a smoother version of the data set, which is half the size of the input data set. Wavelet algorithms are recursive and the smoothed data becomes the input for the next step of the wavelet transform. The Haar wavelet scaling function is

    a_i = (s_i + s_{i+1}) / 2

where a_i is a smoothed value. The Haar transform preserves the average in the smoothed values. This is not true of all wavelet transforms.

High pass filter
In digital signal processing (DSP) terms, the wavelet function is a high pass filter.


A high pass filter allows the high frequency components of a signal through while suppressing the low frequency components. For example, the differences that are captured by the Haar wavelet function represent high frequency change between an odd and an even value.

Low pass filter
In digital signal processing (DSP) terms, the scaling function is a low pass filter. A low pass filter suppresses the high frequency components of a signal and allows the low frequency components through. The Haar scaling function calculates the average of an even and an odd element, which results in a smoother, low pass signal.

Orthogonal (or Orthonormal) Transform
The definition of orthonormal (a.k.a. orthogonal) transforms in Wavelet Methods for Time Series Analysis by Percival and Walden, Cambridge University Press, 2000, Chapter 3, section 3.1, is one of the best I've seen. I've quoted this below:

Orthonormal transforms are of interest because they can be used to re-express a time series in such a way that we can easily reconstruct the series from its transform. In a loose sense, the 'information' in the transform is thus equivalent to the 'information' in the original series; to put it another way, the series and its transform can be considered to be two representations of the same mathematical entity.

In terms of wavelet transforms this means that the original time series can be exactly reconstructed from the time series average and coefficients generated by an orthogonal (orthonormal) wavelet transform.

Signal estimation
This is also referred to as 'de-noising'.
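To make the reconstruction property quoted above concrete, here is a minimal sketch of the inverse of the Haar step shown earlier. It is my own illustration, not the Percival and Walden material or the Java source published here; since a_i = (s_i + s_{i+1})/2 and c_i = (s_i - s_{i+1})/2, the original pair is recovered as s_i = a_i + c_i and s_{i+1} = a_i - c_i.

    public final class HaarInverseSketch {
        /** Inverse of one Haar step: input holds averages, then wavelet coefficients. */
        public static double[] inverseStep(double[] transformed) {
            int half = transformed.length / 2;
            double[] signal = new double[transformed.length];
            for (int i = 0; i < half; i++) {
                double a = transformed[i];            // average
                double c = transformed[half + i];     // wavelet coefficient
                signal[2 * i]     = a + c;            // even element restored
                signal[2 * i + 1] = a - c;            // odd element restored
            }
            return signal;
        }
    }

Applying this step level by level, from the single final average back up through the coefficient bands, rebuilds the original series exactly, which is what the orthonormal property promises.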

Signal estimation algorithms attempt to characterize portions of the time series and remove those that fall into a particular model of noise.

These Web pages publish some heavily documented Java source code for the Haar wavelet transform. Books like Wavelets Made Easy explain some of the mathematics behind the wavelet transform. I have found, however, that the implementation of this code can be at least as difficult as understanding the wavelet equations.

For example, the in-place Haar wavelet transform produces wavelet coefficients in a butterfly pattern in the original data array. The Java source published here includes code to reorder the butterfly into coefficient spectra, which are more useful when it comes to analyzing the data. Although this code is not large, it took me most of a Saturday to implement the code to reorder the butterfly data pattern.

The wavelet Lifting Scheme, developed by Wim Sweldens and others, provides a simpler way to look at many wavelet algorithms. I started to work on Lifting Scheme wavelet implementations after I had written this web page and developed the software. The Haar wavelet code is much simpler when expressed in the lifting scheme. See my web page. The link to the Java source download Web page is below.

There are a variety of wavelet analysis algorithms.

Different wavelet algorithms are applied depending on the nature of the data analyzed. The Haar wavelet, which is used here, is very fast and works well for financial time series (e.g., the close price for a stock). Financial time series are non-stationary (to use a signal processing term). This means that even within a window, financial time series cannot be described well by a combination of sine and cosine terms. Nor are financial time series cyclical in a predictable fashion.

Financial time series lend themselves to Haar wavelet analysis since graphs of financial time series tend to be jagged, without a lot of smooth detail.

For example, the graph below shows the daily close price for Applied Materials over a period of about two years.

Daily close price for Applied Materials (symbol: AMAT), 12/18/97 to 12/30/99.

The Haar wavelet algorithms I have implemented work on data that consists of samples that are a power of two. In this case there are 512 samples.

There are a wide variety of popular wavelet algorithms, including Daubechies wavelets, Mexican Hat wavelets and Morlet wavelets. These wavelet algorithms have the advantage of better resolution for smoothly changing time series. But they have the disadvantage of being more expensive to calculate than the Haar wavelets.

The higher resolution provided by these wavelets is not worth the cost for financial time series, which are characterized by jagged transitions.

The Haar wavelet algorithms published here are applied to time series where the number of samples is a power of two (e.g., 2, 4, 8, 16, 32, 64...). The Haar wavelet uses a rectangular window to sample the time series. The first pass over the time series uses a window width of two. The window width is doubled at each step until the window encompasses the entire time series.

Each pass over the time series generates a new time series and a set of coefficients. The new time series is the average of the previous time series over the sampling window. The coefficients represent the average change in the sample window. For example, if we have a time series consisting of the values v_0, v_1, ..., v_n, a new time series, with half as many points, is calculated by averaging the points in the window. If it is the first pass over the time series, the window width will be two, so two points will be averaged:

    for (i = 0; i < n; i = i + 2)
        a[i / 2] = (v[i] + v[i + 1]) / 2.0;
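To show how the passes fit together, here is a hedged sketch of a complete forward Haar transform in Java. It is a simplified 'ordered' version of the algorithm described above, written as an illustration; it is not the heavily documented source published here, and the class and method names are invented.

    import java.util.Arrays;

    public final class OrderedHaarSketch {
        /**
         * Forward Haar transform of a series whose length is a power of two.
         * Afterwards, values[0] holds the final average and the remaining
         * elements hold the wavelet coefficients, coarsest band first.
         */
        public static void forward(double[] values) {
            for (int length = values.length; length > 1; length /= 2) {
                int half = length / 2;
                double[] scratch = new double[length];
                for (int i = 0; i < half; i++) {
                    double even = values[2 * i];
                    double odd  = values[2 * i + 1];
                    scratch[i]        = (even + odd) / 2.0;  // averages feed the next pass
                    scratch[half + i] = (even - odd) / 2.0;  // coefficients for this band
                }
                // Only the first 'length' elements are rewritten on this pass.
                System.arraycopy(scratch, 0, values, 0, length);
            }
        }

        public static void main(String[] args) {
            double[] series = {5, 3, 8, 4, 6, 2, 9, 7};   // 8 = 2^3 samples
            forward(series);
            System.out.println(Arrays.toString(series));
        }
    }

The in-place transform mentioned earlier produces the same values interleaved in a butterfly pattern in the original array, which is why the published Java source includes code to reorder them into coefficient spectra.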

This paper presents a new design for the implementation of a Vedic mathematics based discrete wavelet transform for biomedical signal processing. The design of the low power architecture using alternative devices is the background of the work. The DWT architecture consists of an adder, a multiplier, a Multiply Accumulate (MAC) unit and RAM or ROM to store the coefficients. The existing Complementary Metal Oxide Semiconductor (CMOS) based design suffers from leakage. The proposed FinFET and CNTFET technologies overcome the problems faced in CMOS technology. The processor core of the system on chip (SoC) is designed using Vedic mathematics sutras. The efficiency of Vedic mathematics and the advances of low power VLSI are combined in this paper.

The CNTFET design reduces the power by about 95% and provides controllability of the threshold voltage. The design is carried out in 32 nm FinFET technology. The design is mainly focused on the complete processor core block, implemented using a MAC with a Vedic multiplier in FinFET technology.

The experiments were carried out using Synopsys HSPICE.

Senthil Kumar was born in Tamil Nadu, India. He received the B.E. degree in Electronics & Communication Engineering and the M.Tech. degree in VLSI Systems from Bharathiyar University and the National Institute of Technology (NIT), Trichy, in 1997 and 2005, respectively. He completed the Ph.D. degree (Low Power VLSI Design) in 2017 at Anna University, Chennai. He has 18 years of teaching and research experience in reputed academic institutions. He is presently working as a Professor in the Department of Electronics and Communication Engineering at Malla Reddy College of Engineering and Technology, Hyderabad. He is a Life Member of the International Association of Engineers (IAENG) and MIETE. His research interests include low-power circuit design, semiconductor devices and power management circuit design.

Ravindrakumar was born in Tamil Nadu, India. He received the B.E. degree in Electronics & Communication Engineering and the M.Tech. degree in Electronics Design & Technology from CIT, Coimbatore and NIT, Calicut, in 2003 and 2008, respectively. He completed the Ph.D. degree in 2017 at Anna University, Chennai. He is presently working as an Associate Professor in the Department of Biomedical Engineering at Sri Shakthi Institute of Engineering and Technology, Coimbatore, India. He has 14 years of teaching experience and 7 years of industry consultancy. He is the founder and director of IRRD AUTOMATONS. His research interests include low power circuit design, semiconductor devices and biomedical signal processing.

Nithya was born in Tamil Nadu, India. She received the B.E. degree in Electronics & Communication Engineering and the M.E. degree in Computer Science and Engineering from Anna University Chennai and Anna University Coimbatore, in 2007 and 2010, respectively. She is presently heading the Nanoelectronics and Integration Division of IRRD Automatons, Karur-Erode, India. She is the co-founder of IRRD Automatons. She has 7 years of teaching experience and 5 years of industry consultancy. Her research interests include signal processing, VLSI design and communication systems.

Kousik was born in Tamil Nadu, India, in 1982. He received the M.C.A. degree in Computer Applications from Anna University, Chennai, in 2006, the M.Phil. degree in Computer Science from PRIST University, Thanjavur, in 2010, and the Ph.D. degree in Computer Science from Bharathiar University, Coimbatore, in 2016. From 2006 to 2013, he was an Assistant Professor with the Computer Applications Department in private engineering colleges affiliated to Anna University, Chennai. He is now working as an Associate Professor at Galgotias University, Greater Noida, Uttar Pradesh, India. He provides consultancy services for research scholars.



He is the author of more than 15 articles. His research interests include Ad Hoc Sensor Networks, Cloud Computing and Applications, Data Mining, Mobile Computing, Information Processing, Wireless Technology, ICT Convergence, and Energy Optimization. He is a technical reviewer for various international journals.


He is a Life Member of the International Association of Engineers (IAENG), as well as IACSIT, the Internet Society, ISTE and MIETE.