A full overview of HTML Canvas

See Example 1.7.

Example 1.7: Canvas Demo — 1. Shifted the line by 100px horizontally; 2. Shifted the arc by (10,10) to the bottom right.

var linearGrad = demoCanvas.createLinearGradient(5, 5, 100, 5);
linearGrad.addColorStop(0, "blue");
linearGrad.addColorStop(0.5, "green");
linearGrad.addColorStop(1, "red");
demoCanvas.strokeStyle = linearGrad;
demoCanvas.lineWidth = 50;
demoCanvas.moveTo(50, 5);
demoCanvas.lineTo(155, 5);
demoCanvas.stroke();
// Change strokeStyle to fillStyle and stroke() to fill().
// Then, change the lineTo(...) call to rect(5,5,95,50).
// Results should be almost the same.
demoCanvas.closePath();

demoCanvas.beginPath();
var radialGrad = demoCanvas.createRadialGradient(50, 50, 10, 50, 50, 40);
radialGrad.addColorStop(0, "blue");
radialGrad.addColorStop(0.5, "green");
radialGrad.addColorStop(1, "red");
demoCanvas.fillStyle = radialGrad;
demoCanvas.arc(60, 60, 30, 0, 2 * Math.PI, false);
demoCanvas.fill();

Direct pixel manipulation & Images

The ImageData object can be used to manipulate individual pixels.

It has three properties:

width: The width of the image data in device-display pixels.

height: The height of the image data in device-display pixels.

data: This is a Uint8ClampedArray (see the MDN docs) which contains the individual pixel data as a series of (R,G,B,A) bytes, from the top-left pixel to the bottom-right pixel.

So the pixel at (x, y) has its red value at data[4*(y*width+x)], its green value at data[4*(y*width+x)+1], its blue value at data[4*(y*width+x)+2], and its alpha value at data[4*(y*width+x)+3].
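To make that indexing concrete, here’s a tiny sketch (the 400×400 size and the (25, 40) coordinates are arbitrary values I picked; demoCanvas is the 2D context as in the demos above):

// demoCanvas is assumed to be a 2D rendering context, as in the demos above.
var imgData = demoCanvas.getImageData(0, 0, 400, 400);

// Read the RGBA channels of the pixel at (x, y) = (25, 40).
var x = 25, y = 40;
var offset = 4 * (y * imgData.width + x); // 4 bytes (R,G,B,A) per pixel
var red   = imgData.data[offset];
var green = imgData.data[offset + 1];
var blue  = imgData.data[offset + 2];
var alpha = imgData.data[offset + 3];
console.log(red, green, blue, alpha);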

NOTE: An RGBA value can be used to represent a color, where R, G, and B are the amounts of red, green, and blue, and A is the opacity (alpha value). In the Canvas, these components can have any integer value in [0, 255].

You can get an ImageData object with the following methods in the Canvas API:

createImageData(sw, sh): Creates an ImageData object of width sw and height sh, defined in CSS pixels. All the pixels are initialized to transparent black (R, G, B, and A all 0). Note that CSS pixels might map to a different number of actual device pixels, as exposed by the object itself.

createImageData(imagedata): Copies the given image data and returns the copy.

getImageData(sx, sy, sw, sh): Returns a copy of the canvas’s pixels in the rectangle formed by sx, sy, sw, sh as an ImageData object. Pixels outside the canvas are set to transparent black.

putImageData(imagedata, dx, dy, dirtyX, dirtyY, dirtyWidth, dirtyHeight): (The last four ‘dirty’ arguments are optional.) Copies the pixel values in imagedata into the canvas rectangle at dx, dy. If you provide the last four arguments, it only copies the dirty pixels of the image data (the rectangle at dirtyX, dirtyY of dimensions dirtyWidth×dirtyHeight). Not passing the last four arguments is the same as calling putImageData(imagedata, dx, dy, 0, 0, imagedata.width, imagedata.height).

For all integer values of x and y where dirtyX ≤ x < dirtyX+dirtyWidth and dirtyY ≤ y < dirtyY+dirtyHeight, the four channels of the pixel with coordinate (x, y) in the imagedata data structure are copied to the pixel with coordinate (dx+x, dy+y) in the underlying pixel data of the canvas.
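For example, here’s a minimal sketch of the “dirty rectangle” form (the coordinates are arbitrary; demoCanvas is assumed to be the 2D context as elsewhere in this article):

// demoCanvas is assumed to be a 2D rendering context.
var imgData = demoCanvas.getImageData(0, 0, 100, 100);

// Copy only the 20×20 sub-rectangle of imgData whose top-left corner is at
// (10, 10) inside the image data; it lands at (200+10, 200+10) on the canvas.
demoCanvas.putImageData(imgData, 200, 200, 10, 10, 20, 20);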

Example 1.8: Canvas Demo 1.8(a) — Randomized pixels in a 400×400 canvas

I’ve filled the whole 400×400 canvas with random colors (fully opaque) using the getImageData/putImageData methods. Note that using beginPath/closePath isn’t necessary with the ImageData API, because you’re not using the Canvas API to form shapes/curves.

/* Replace this line with demoCanvas.createImageData(390,390) instead. */
var rectData = demoCanvas.getImageData(10, 10, 390, 390);

for (var y = 0; y < 390; y++) {
  for (var x = 0; x < 390; x++) {
    const offset = 4 * (y * 390 + x); // 4* because each pixel is 4 bytes
    rectData.data[offset]     = Math.floor(Math.random() * 256); // red
    rectData.data[offset + 1] = Math.floor(Math.random() * 256); // green
    rectData.data[offset + 2] = Math.floor(Math.random() * 256); // blue
    rectData.data[offset + 3] = 255; // alpha, fully opaque
  }
}

demoCanvas.putImageData(rectData, 10, 10);
/* beginPath/closePath aren't required for this code */

Canvas Demo 1.8(b) — x starts with a random value between 1 and y.

Canvas Demo 1.8(c) — x ends at a random value greater than its initial value.

Images can be drawn onto the canvas directly. The drawImage method can be used in three different ways to do so. It requires a CanvasImageSource as the pixel source. A CanvasImageSource can be one of the following: HTMLImageElement, HTMLCanvasElement, or HTMLVideoElement. To copy an image into the canvas, you can use a hidden <img style="display:none;" src="..." /> element. You could also copy an existing canvas or a snapshot of a video!!!

drawImage(image, dx, dy): Copies the image source into the canvas at (dx, dy). The whole image is copied.

drawImage(image, dx, dy, dw, dh): Copies the image source into the rectangle in the canvas at (dx, dy) of size (dw, dh). It will be scaled down or up if necessary.

drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh): Copies the rectangle sx, sy, sw, sh of the image source into the rectangle dx, dy, dw, dh of the canvas, scaling up or down if required. If the rectangle sx, sy, sw, sh has parts outside the actual source, the source rectangle is clipped to include only the in-bounds parts and the destination rectangle is clipped in the same proportion; however, you shouldn’t pass an out-of-bounds rectangle — keep it simple, stupid.

Example 1.9: Image copy example

var image = document.getElementById('game-img');
demoCanvas.drawImage(image, 50, 50, 200, 200, 100, 100, 200, 200);
/* beginPath/closePath aren't required for this code */

NOTE: Add this to your HTML —

<img id="game-img" src="/path/to/your/image.ext" style="display:none" />

Transformations

Now we’re getting to the exciting parts of the Canvas API!!! The Canvas uses a transformation matrix to transform the input (x, y) coordinates into the displayed (x, y) coordinates.

Note that pixels drawn before the transformation are not transformed — they are untouched. Only stuff drawn after applying the transformation will be changed. There are three built-in transformation methods:

scale(xf, yf): Scales the input by xf in the horizontal direction and yf in the vertical direction. If you want to magnify an image by a factor of m, pass xf=yf=m. To stretch/squeeze an image horizontally by m, pass xf=m, yf=1. To stretch/squeeze an image vertically by m, pass xf=1, yf=m.

rotate(angle): Rotates the input clockwise by angle, given in radians.

translate(dx, dy): Shifts the input by dx, dy.

Example 2.0: Drawing a transformed image on top of the original image. Scale=2,2; Rotate=30deg; Translate=10,10

var image = document.getElementById('game-img');
demoCanvas.drawImage(image, 0, 0, 400, 400);
demoCanvas.rotate(Math.PI / 6);
demoCanvas.scale(2, 2);
demoCanvas.translate(10, 10);
demoCanvas.drawImage(image, 0, 0, 400, 400);

In Example 2.0, notice how the original image is intact. Only the second image (overlay) is transformed by the three methods — rotate, scale, and translate.

To revert all transformations:

demoCanvas.setTransform(1, 0, 0, 1, 0, 0); // sets the transform to the identity matrix

NOTE:

Changing the order of transformations can affect the final result.

For advanced users, you may want to look at the transform and setTransform methods. These let you set the transformation matrix directly.

getImageData and putImageData are not affected by the transform. That means if you draw a black rectangle using putImageData, it won’t be transformed (rotated/scaled/translated).

As changing the transform only works for drawings done after applying it, you can’t scale/rotate/translate the existing canvas contents directly (nor does a getImageData followed by a putImageData work). You may have to create another hidden canvas of the same size, copy the image data into the second canvas, and then use drawImage with that second canvas as the source; a sketch of this follows.
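Here’s a rough sketch of that hidden-canvas workaround (the element ids, the 400×400 size, and the rotation are my own placeholders, not part of the linked demo):

// Assumes two 400×400 canvases in the HTML, e.g.:
// <canvas id="main-canvas" width="400" height="400"></canvas>
// <canvas id="hidden-canvas" width="400" height="400" style="display:none"></canvas>
var mainCanvas = document.getElementById('main-canvas');
var hiddenCanvas = document.getElementById('hidden-canvas');
var mainCtx = mainCanvas.getContext('2d');
var hiddenCtx = hiddenCanvas.getContext('2d');

// Copy the current pixels of the main canvas into the hidden canvas.
var pixels = mainCtx.getImageData(0, 0, 400, 400);
hiddenCtx.putImageData(pixels, 0, 0);

// Redraw the main canvas with a transform applied, using the hidden
// canvas as the (untransformed) image source.
mainCtx.setTransform(1, 0, 0, 1, 0, 0); // reset to the identity matrix
mainCtx.clearRect(0, 0, 400, 400);
mainCtx.rotate(Math.PI / 6);
mainCtx.drawImage(hiddenCanvas, 0, 0);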

Check this example: https://canvasdemo2d.github.io/ (source: https://github.com/canvasdemo2d/canvasdemo2d.github.io). Move your cursor over the canvas and see what it does. It won’t work on mobile phones, unfortunately. The cascading effect is due to the fact that I am translating the canvas w.r.t. the mouse using drawImage. drawImage then writes to the same canvas it’s reading from, which causes the repeating pattern!

Hit Regions

As of the time of writing (March 2019), support for hit regions is experimental in Chrome and Firefox.

Mobile browsers don’t support it at all. Hence, I will explain “what” hit regions could be used for.

Hit regions are used to catch pointer events on the canvas and know “where” the user clicked. For example, you could have two rectangles A & B — when the user clicks A, you want to perform action $A, and when the user clicks B, you want to perform action $B. Let’s walk through the whole process!

A hit region is related to these three things:

Path: The current path when the hit region was created (for example, a rectangle). All pointer events inside the path are routed to that hit region.

Id: A unique id string to identify the hit region in the event handler.

Control: An alternative DOM element (an HTMLButtonElement, for example) that gets the pointer events instead.

NOTE: The path is automatically provided by the canvas when adding a new hit region. Only one — id or control — is needed to form a hit region.

The methods for manipulating the hit-region list of a canvas are:

addHitRegion(options): Takes a HitRegionOptions object and forms a hit region enclosed by the current path. The options argument should have a string id property or an HTMLElement control property.

removeHitRegion(id): Removes the hit region with the id id so that it no longer receives any pointer events.

clearHitRegions(): Removes all hit regions.

demoCanvas.fillStyle = 'red';
demoCanvas.rect(10, 10, 60, 60);
demoCanvas.fill(); // first rectangle
demoCanvas.addHitRegion({ id: 'btn1' });

demoCanvas.fillStyle = 'blue';
demoCanvas.rect(10, 110, 60, 60);
demoCanvas.fill();
demoCanvas.addHitRegion({ id: 'btn2' });

document.getElementById('demo-canvas').onpointerdown = function(evt) {
  // demoCanvas is the 2d context, not the HTMLCanvasElement
  console.log('Hello id: ' + evt.region); // region is the hit-region id
};

// This code might not work due to this being an
// unsupported (new) feature of HTML5.

NOTE: Hit regions aren’t supported yet — but that doesn’t mean you can’t capture pointer events. You could create your own “hit-region list” and representations of the boundaries of the regions (because you can’t get the current path from the canvas, too bad). In the document.getElementById('demo-canvas').onpointerdown handler, get the current clientX, clientY properties and search through your hit-region list. Based on the hit region that contains the point, you can perform the intended action. A sketch of this approach follows.
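Here’s a minimal sketch of such a hand-rolled hit-region list (the region coordinates and ids are hypothetical, and the click position is converted from client coordinates to canvas coordinates with getBoundingClientRect):

var canvasElement = document.getElementById('demo-canvas');

// A hand-rolled "hit-region list": each entry stores a bounding box and an id.
var myHitRegions = [
  { id: 'btn1', x: 10, y: 10,  w: 60, h: 60 },
  { id: 'btn2', x: 10, y: 110, w: 60, h: 60 }
];

canvasElement.onpointerdown = function(evt) {
  // Convert client coordinates into canvas coordinates.
  var bounds = canvasElement.getBoundingClientRect();
  var x = evt.clientX - bounds.left;
  var y = evt.clientY - bounds.top;

  // Find the region containing the point and perform its action.
  for (var i = 0; i < myHitRegions.length; i++) {
    var r = myHitRegions[i];
    if (x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h) {
      console.log('Hit region: ' + r.id);
      break;
    }
  }
};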

States and the clip() method

State saving is a convenience provided by the W3C specification. You can save the current state of a canvas and restore it later. You could also build such a system (partially) by writing your own JavaScript model, but you would have to save quite a bit of stuff: the transformation matrix, the hit-region list, style properties, and so on. Furthermore, you cannot revert the clipping area (we’ll get to the clip method in a moment) directly.

NOTE: The save/restore methods do not save & restore the actual drawing/pixels. They only save the other properties. Hence, I would recommend heavily using the save & restore methods to go back and forth instead of erasing stuff on your own or making your own state-saving mechanism.

The CanvasRenderingContext2D object has an associated state stack. The save method will push the current canvas state onto that stack, while the restore method will pop the latest state from the stack.
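For instance, here’s a small sketch; the style, translation, and rectangles are arbitrary:

demoCanvas.save();                 // push the current state (styles, transform, clipping region, ...)

demoCanvas.fillStyle = 'green';
demoCanvas.translate(100, 100);
demoCanvas.fillRect(0, 0, 50, 50); // drawn with the temporary style and transform

demoCanvas.restore();              // pop: fillStyle and the transform revert to what they were
demoCanvas.fillRect(0, 0, 50, 50); // drawn with the original state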

The Clipping Region

The clipping region is a specific region in which all drawings are to be done. Obviously, by default, the clipping region is the whole canvas. But you may want to draw in a specific region instead of the whole thing. For example, you may want to draw the lower half of a star formed by multiple lineTo methods.

So, for example, let’s say you know how to draw a star in the canvas. It touches all sides of the canvas. But now you want to only display the lower half of the star. In this scenario, you would:

1. Save the state of the canvas
2. Clip the lower-half region
3. Draw your star (as if on the whole canvas)
4. Restore the canvas state

To clip a region, you have to call the clip() method, which does the following:

The clip() method must create a new clipping region by calculating the intersection of the current clipping region and the area described by the path, using the non-zero winding number rule.

Open subpaths must be implicitly closed when computing the clipping region, without affecting the actual subpaths.

The new clipping region replaces the current clipping region.

When the context is initialized, the clipping region must be set to the rectangle with the top left corner at (0,0) and the width and height of the coordinate space.

— W3C Documentation for Canvas 2D Context

demoCanvas.save();
demoCanvas.rect(0, 200, 400, 200); // lower-half rectangle subpath
demoCanvas.clip();
/* star drawing method */
demoCanvas.restore();

That’s all for now.

I will write an article on animations with the canvas and how to write a custom interface completely on the canvas.

Further reading:

How to use Firebase for building Android multiplayer games
How to synchronize your game app across multiple Android devices
Circular Dependencies in JavaScript

Shukant Pal is the creator of the Silcos kernel.

He is an avid learner and is now practicing advanced web application development.

He has hands-on experience with React and its ecosystem.

All quotations are taken from the W3C docs for Canvas 2D Context.

Hey, I’m Shukant Pal.

I am developing a lot of web applications in my free time.

Follow me on social media.

Shukant Kumar Pal (@ShukantP) | Twitter