Most developers would say that a dynamic language (like JS) does not have types. Let's see what the ES5.1 specification has to say on the topic:
> Algorithms within this specification manipulate values each of which has an associated type. The possible value types are exactly those defined in this clause. Types are further subclassified into ECMAScript language types and specification types.
> An ECMAScript language type corresponds to values that are directly manipulated by an ECMAScript programmer using the ECMAScript language. The ECMAScript language types are Undefined, Null, Boolean, String, Number, and Object.
Now, if you're a fan of strongly-typed (statically-typed) languages, you probably object to this usage of the word "type". In those languages, "type" means a whole lot more than it does here in JS.
Some people say JS shouldn't claim to have "types," and that they should instead be called "tags" or perhaps "sub-types."
Bah. We're going to use this rough definition (the same one that seems to drive the wording of the spec!): a type is an intrinsic, built-in set of characteristics that uniquely identifies the behavior of a particular value and distinguishes it from other values, both to the engine and to the developer.
In other words, if both the engine and the developer treat the value `42` (the number) differently than they treat the value `"42"` (the string), then those two values have different types -- `number` and `string`, respectively. When you use `42`, you are intending to do something numeric, like math. But when you use `"42"`, you are intending to do something string'ish, like outputting to the page, etc. These two values have different types.
That's by no means a perfect definition. But it's good enough for us. And it's consistent with how JS describes itself.
In JavaScript, variables don't have types -- values have types. Variables can hold any value, at any time.
Another way to think about JS types is that JS has types but it doesn't have "type enforcement," in that the engine doesn't insist that a variable always holds values of the same initial type. A variable can, in one assignment statement, hold a `string`, and in the next hold a `number`, and so on.
Also, the value `42` has an intrinsic type of `number`, and its type cannot be changed. Another value, like `"42"` with the `string` type, can be created from the `number` value `42` through a process called coercion, which we will cover later.
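For instance, a quick preview of what that looks like (coercion is covered in detail later):

```js
var a = 42;

var b = String( a );	// "42" -- explicit coercion to a new string value
var c = a + "";			// "42" -- implicit coercion

typeof b;				// "string"
typeof c;				// "string"
typeof a;				// "number" -- the original value is unchanged
```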
JavaScript defines 7 built-in types:
* `null`
* `undefined`
* `boolean`
* `number`
* `string`
* `object`
* `symbol` -- added in ES6!
Note: All of these types except `object` are called "primitives".
The `typeof` operator inspects the type of the given value, and always returns one of 7 string values (though, strangely, there's not an exact 1-to-1 match with the 7 built-in types we just listed -- see below!).
typeof undefined === "undefined"; // true
typeof true === "boolean"; // true
typeof 42 === "number"; // true
typeof "42" === "string"; // true
typeof { life: 42 } === "object"; // true
// added in ES6!
typeof Symbol() === "symbol"; // true
These 6 listed types have values of the corresponding type and return a string of the same name, as shown. `Symbol` is a new data type as of ES6, and will be covered later.
As you may have noticed, I excluded `null` from the above listing. It's special -- special in the sense that its behavior with `typeof` is buggy:
typeof null === "object"; // true
It would have been nice (and correct!) if it returned `"null"`, but this original bug in JS has persisted for nearly two decades, and will likely never be fixed, because there's so much existing web content that relies on the buggy behavior that "fixing" the bug would create "more bugs" and break a lot of web software.
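As a practical workaround, if you want to test for a `null` value using its type, you need a compound condition, since `null` is the only primitive that is falsy but also reports `"object"` from `typeof`:

```js
var a = null;

(!a && typeof a === "object"); // true
```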
So what's the seventh string value that `typeof` can return? And why is it not an actual top-level built-in type?
typeof function a(){ /* .. */ } === "function"; // true
It's easy to think that `function` would be a top-level built-in type in JS, especially given this behavior of the `typeof` operator. However, if you read the spec, you'll see it's actually somewhat of a "sub-type" of object. Specifically, a function is referred to as a "callable object" -- an object that has an internal `[[Call]]` property that allows it to be invoked.
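Since functions are really just objects, they can have properties. For example, a function's `length` property reports the number of formal parameters it was declared with:

```js
function bar(x, y, z) { /* .. */ }

bar.length; // 3
```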
What about arrays? They're pretty native to JS, so are they a special type?
typeof [1,2,3] === "object"; // true
Nope, just objects. It's most appropriate to think of them also as a "sub-type" of object, in this case with the additional characteristics of being numerically indexed (as opposed to just being string-keyed like plain objects) and maintaining an automatically updated `.length` property.
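A quick illustration of those array characteristics:

```js
var arr = [ "hello", "world" ];

typeof arr;		// "object"
arr.length;		// 2

arr[2] = "!";	// assign to a new numeric index
arr.length;		// 3 -- `length` updated automatically
```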
If you use `typeof` against a variable, it's not asking "what's the type of the variable?" as it may seem, since (as we said above) JS variables have no types. Instead, it's asking "what's the type of the value in the variable?"
var a = 42;
typeof a; // "number"
a = true;
typeof a; // "boolean"
The `typeof` operator always returns a string. So:
typeof typeof 42; // "string"
The first `typeof 42` returns `"number"`, and then `typeof "number"` is `"string"`.
Values that report `"object"` from `typeof` (such as an array) are additionally tagged with an internal `[[Class]]` property (think of this more as an internal classification rather than related to classes as in class-oriented coding). This property cannot be accessed directly, but can generally be revealed indirectly by borrowing the default `Object.prototype.toString(..)` method called against the value. For example:
Object.prototype.toString.call( [1,2,3] ); // "[object Array]"
Object.prototype.toString.call( /regex-literal/i ); // "[object RegExp]"
So, for the array in this example, the internal `[[Class]]` is `Array`, and for the regular expression, it's `RegExp`. In most cases, this internal `[[Class]]` value corresponds to the built-in native (see below) that's related to the value, but that's not always the case.
What about primitive values? First, `null` and `undefined`:
Object.prototype.toString.call( null ); // "[object Null]"
Object.prototype.toString.call( undefined ); // "[object Undefined]"
You'll note that there are no `Null()` or `Undefined()` native constructors, but nevertheless `"Null"` and `"Undefined"` are the internal `[[Class]]` values exposed.
But for the other scalar primitives like `string`, `number`, and `boolean`, another behavior actually kicks in, which is usually called "boxing":
Object.prototype.toString.call( "abc" ); // "[object String]"
Object.prototype.toString.call( 42 ); // "[object Number]"
Object.prototype.toString.call( true ); // "[object Boolean]"
In this snippet, each of the scalar primitives is automatically boxed by (aka coerced to -- see Chapter 2) its respective object wrapper, which is why `String`, `Number`, and `Boolean` are revealed as the internal `[[Class]]` values.
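To illustrate how that `[[Class]]` exposure can be put to work, here's a small helper (hypothetical, not a built-in) that extracts just the classification name:

```js
// illustrative helper -- not part of JS itself
function classOf(v) {
	// "[object Array]" -> "Array", "[object Number]" -> "Number", etc.
	return Object.prototype.toString.call( v ).slice( 8, -1 );
}

classOf( [1,2,3] );		// "Array"
classOf( /re/ );		// "RegExp"
classOf( null );		// "Null"
classOf( 42 );			// "Number"
```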
There are several special values spread across the various types which the alert JS developer needs to be aware of, and use properly.
For the `undefined` type, there is one and only one value: `undefined`. For the `null` type, there is one and only one value: `null`. So for both of them, the label is both its type and its value.
Both `undefined` and `null` are often taken to be interchangeable as either "empty" values or "non" values. Other developers prefer to distinguish between them with nuance, like for instance:
* `null` is an empty value
* `undefined` is a missing value
Or:
* `undefined` hasn't had a value yet
* `null` had a value and doesn't anymore
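Whichever nuance you prefer, one practical note: the loose-equality check `== null` (coercion is covered later) matches both `undefined` and `null`, and nothing else, so it's a common way to test for either "empty" case at once:

```js
var a;
var b = null;
var c = 0;

a == null;	// true
b == null;	// true
c == null;	// false -- only `undefined` and `null` match
```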
Regardless of how you choose to "define" and use these two values, `null` is a special keyword, not an identifier, and thus you cannot treat it as a variable to assign to (why would you!?). However, `undefined` is (unfortunately) an identifier. Uh oh.
In non-strict mode, it's actually possible (though incredibly ill-advised!) to assign a value to the globally provided `undefined` identifier:
function foo() {
undefined = 2; // really bad idea!
}
foo();
In strict mode, though, such an assignment throws an error:
function foo() {
"use strict";
undefined = 2; // TypeError!
}
foo();
In both non-strict mode and strict mode, however, you can create a local variable of the name `undefined`. But again, this is a terrible idea!
function foo() {
"use strict";
var undefined = 2;
console.log( undefined ); // 2
}
foo();
Friends don't let friends override `undefined`. Ever.
While `undefined` is a built-in identifier that holds (unless modified -- see above!) the built-in `undefined` value, another way to get this value is the `void` operator.
The expression `void ___` "voids" out any value, so that the result of that `void`-expression is always the `undefined` value. It doesn't modify the existing value; it just ensures that no value comes back from the operator expression.
var a = 42;
console.log(void a, a); // undefined 42
By convention (mostly from C-language programming), to represent the `undefined` value stand-alone by using `void`, you'd use `void 0` (though clearly even `void true` or any other `void`-expression does the same thing). There's no practical difference between `void 0`, `void 1`, and `undefined`.
But the `void` operator can be useful in a few other circumstances, if you need to ensure that an expression has no result value (even if it has side effects).
For example:
function doSomething() {
// note: `APP.ready` is provided by our application
if (!APP.ready) {
// try again later
return void setTimeout(doSomething,100);
}
var result;
// do some other stuff
return result;
}
// were we able to do it right away?
if (doSomething()) {
// handle next tasks right away
}
Here, the `setTimeout(..)` function returns a numeric value, but we want to `void` that out so that the return value of our function doesn't give a false positive to the `if` statement.
Many devs prefer to just do something like this, which works the same but doesn't use the `void` operator:
if (!APP.ready) {
// try again later
setTimeout(doSomething,100);
return;
}
Variables that currently have no value actually have the `undefined` value. Calling `typeof` against such variables will return `"undefined"`:
var a;
typeof a; // "undefined"
var b = 42;
b = void 0;
typeof b; // "undefined"
It's tempting for most developers to think of the name "undefined" and think of it as a synonym for "undeclared". However, in JS, these two concepts are quite different.
An "undefined" variable is one that has been declared in the accessible scope, but at the moment has no other value in it. By contrast, an "undeclared" variable is one which has not been formally declared in the accessible scope.
Consider:
var a;
a; // undefined
b; // ReferenceError: b is not defined
An annoying confusion is the error message that browsers assign to this condition. As you can see, the message is "b is not defined", which is of course very easy and reasonable to confuse with "b is undefined". Yet again, "undefined" and "is not defined" are very different things. It'd be nice if the browsers said something like "b is not found" or "b is not declared".
There's also a special behavior associated with `typeof` as it relates to undeclared variables that even further reinforces the confusion. Consider:
var a;
typeof a; // "undefined"
typeof b; // "undefined"
The `typeof` operator returns `"undefined"` even for "undeclared" (or "not defined") variables. Notice that there was no error thrown when we executed `typeof b`, even though `b` is an undeclared variable. This is a special safety guard in the behavior of `typeof`.
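This safety guard comes in handy, for example, when checking for a variable or feature flag that may never have been declared at all, without risking a thrown `ReferenceError` (the `DEBUG` flag here is just an illustrative name):

```js
// this would throw a ReferenceError if DEBUG is undeclared:
// if (DEBUG) { .. }

// but this is safe either way:
if (typeof DEBUG !== "undefined") {
	console.log( "Debugging is starting" );
}
```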
The `number` type includes several special values. We'll take a look at each in detail.
Any mathematic operation you perform without both operands being numbers (or values that can be interpreted as regular numbers in base 10 or base 16) will result in the operation failing to produce a valid number, in which case you will get the `NaN` value.
`NaN` literally stands for "not a number," though this label/description is very poor and misleading, as we'll see shortly. It would be much more accurate to think of `NaN` as being "invalid number," "failed number," or even "bad number," than to think of it as "not a number."
For example:
var a = 2 / "foo"; // NaN
typeof a === "number"; // true
In other words: "the type of not-a-number is 'number'!" Hooray for confusing names and semantics.
`NaN` is a kind of "sentinel value" (an otherwise normal value that's assigned a special meaning) that represents a special kind of error condition within the number set. The error condition is, in essence: "I tried to perform a mathematic operation but failed, so here's the failed `number` result instead."
So, if you have a value in some variable and want to test to see if it's this special failed-number `NaN`, you might think you could directly compare it to `NaN` itself, as you can with any other value, like `null` or `undefined`. Nope.
var a = 2 / "foo";
a == NaN; // false
a === NaN; // false
`NaN` is a very special value in that it's never equal to another `NaN` value (that is, it's never equal to itself). It's the only value, in fact, without the Identity characteristic `x === x`. So, `NaN !== NaN`. A bit strange, huh?
So how do we test for it, if we can't compare to `NaN` (since that comparison would always fail)?
var a = 2 / "foo";
isNaN( a ); // true
Easy enough, right? We use a built-in utility called `isNaN(..)` and it tells us if the value is `NaN` or not. Problem solved!
Not so fast.
The built-in `isNaN(..)` utility (which is technically `window.isNaN(..)` in the browser) has a fatal flaw. It appears it tried to take the name of `NaN` ("not a number") too literally -- that its job is, basically, "test if the thing passed in is either not a `number` or is a `number`."
var a = 2 / "foo";
var b = "foo";
a; // NaN
b; "foo"
window.isNaN( a ); // true
window.isNaN( b ); // true -- ouch!
Clearly, "foo"
is not a number, but it's definitely not the NaN
value either. This bug has been in JS since the very beginning (so, over 19 years of ouch).
As of ES6, finally a replacement utility has been provided: `Number.isNaN(..)`. A simple polyfill for it so that you can safely check `NaN` values now even in pre-ES6 browsers is:
if (!Number.isNaN) {
Number.isNaN = function(n) {
return (
typeof n === "number" &&
window.isNaN( n )
);
};
}
var a = 2 / "foo";
var b = "foo";
Number.isNaN( a ); // true
Number.isNaN( b ); // false -- phew!
Actually, we can implement a `Number.isNaN(..)` polyfill even more easily, by taking advantage of that peculiar fact that `NaN` isn't equal to itself. `NaN` is the only value in the whole language where that's true; every other value is always equal to itself.
So:
if (!Number.isNaN) {
Number.isNaN = function(n) {
return n !== n;
};
}
Weird, huh? But it works!
`NaN`s are probably a reality in a lot of real-world JS programs, either on purpose or by accident. It's a really good idea to use a reliable test, like `Number.isNaN(..)` as provided (or polyfilled), to recognize them properly.
If you're currently using just `isNaN(..)` in a program, the sad reality is your program has a bug, even if you haven't been bitten by it yet!
Developers from traditional compiled languages like C are probably used to seeing either a compiler error or run-time exception, like "Divide by zero", for an operation like:
var a = 1 / 0;
However, in JS, this operation is well-defined and results in the value `Infinity`. Unsurprisingly:
var a = 1 / 0; // Infinity
var b = -1 / 0; // -Infinity
As you can see, `-Infinity` results from a divide-by-zero where either (but not both!) of the divide operands is negative.
JS uses finite number representations (IEEE 754 floating-point, which will be covered later), so contrary to pure mathematics, it seems it is possible to overflow (or underflow) even with an operation like addition or subtraction, in which case you'd respectively get `Infinity` or `-Infinity`.
For example:
var a = Number.MAX_VALUE; // 1.7976931348623157e+308
a + a; // Infinity
a + 1E292; // Infinity
a + 1E291; // 1.7976931348623157e+308
If you think too much about that, it's going to make your head hurt. So don't. We'll cover more of the specifics of IEEE-754 numbers and how they work later.
Once you overflow or underflow to either one of the infinities, however, there's no going back. In other words, in an almost poetic sense, you can go from finite to infinite but not from infinite back to finite.
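A quick sketch of that one-way trip:

```js
var a = Number.MAX_VALUE + 1E292;	// Infinity

a - 1E292;				// still Infinity
a - Number.MAX_VALUE;	// still Infinity -- no way back to finite
```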
It's almost philosophical to ask: "What is infinity divided by infinity?" Our naive brains would likely say "1" or maybe "infinity." Turns out neither is true. Both mathematically and in JavaScript, `Infinity / Infinity` is not a defined operation. In JS, this results in `NaN`, as explained above.
But what about any positive non-infinite (that is, finite) number divided by `Infinity`? That's easy! `0`. And what about a negative finite number divided by `Infinity`? Keep reading!
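For the finite-divided-by-infinite case:

```js
1 / Infinity;			// 0
42 / Infinity;			// 0
Infinity / Infinity;	// NaN -- not a defined operation, as noted above
```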
While it may confuse the mathematician-minded reader, JavaScript has both a normal zero `0` (otherwise known as a positive zero `+0`) and a negative zero `-0`. Before we explain why the `-0` exists, we should examine how JS handles it, because it can be quite confusing.
Besides being specified directly, negative zero results from certain mathematic operations. For example:
var a = 0 / -3; // -0
var b = 0 * -3; // -0
Addition and subtraction cannot result in a negative zero.
A negative zero, when examined in the developer console, will usually reveal `-0`, though that was not the common case until fairly recently, so some older browsers may still report it as `0`.
However, if you try to stringify a negative zero value, it will always be reported as `"0"`, according to the spec.
var a = 0 / -3;
// (some browser) consoles at least get it right
a; // -0
// but the spec insists on lying to you!
a.toString(); // "0"
a + ""; // "0"
String( a ); // "0"
// strangely, even JSON gets in on the deception
JSON.stringify( 0 / -3 ); // "0"
Interestingly, the reverse operations (going from string to number) don't lie:
+"-0"; // -0
Number( "-0" ); // -0
JSON.parse( "-0" ); // -0
Note: The `JSON.stringify( -0 )` behavior is particularly strange when you consider the reverse: `JSON.parse( "-0" )`, which indeed reports `-0` as you'd correctly expect, despite the inconsistency with its inverse `JSON.stringify(..)`.
In addition to stringification of negative zero being deceptive to hide its true value, the comparison operators are also (intentionally) configured to lie.
var a = 0;
var b = 0 / -3;
a == b; // true
-0 == 0; // true
a === b; // true
-0 === 0; // true
0 > -0; // false
a > b; // false
Clearly, if you want to distinguish a `-0` from a `0` in your code, you can't just rely on what the developer console outputs, so you're going to have to be a bit more clever:
function isNegZero(n) {
n = Number( n );
return (n === 0) && (1 / n === -Infinity);
}
isNegZero( -0 ); // true
isNegZero( 0 / -3 ); // true
isNegZero( 0 ); // false
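Note: If you can rely on ES6 (or a polyfill), the built-in `Object.is(..)` utility can also make this distinction, without the division trick:

```js
Object.is( -0, 0 );			// false
Object.is( 0 / -3, -0 );	// true
Object.is( NaN, NaN );		// true -- handy for the earlier NaN discussion, too
```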
Now, why do we need a negative zero, besides academic trivia?
There are certain applications where developers use the magnitude of a value to represent one piece of information (like speed of movement per animation frame) and the sign of that number to represent another piece of information (like the direction of that movement).
In those applications, as one example, if a variable arrives at zero and it loses its sign, then you would lose the information of what direction it was moving in before it arrived at zero. Preserving the sign of the zero prevents potentially unwanted information loss.
We've already alluded to other built-ins, often called "natives," like `String` and `Number`. Let's examine those in detail now.
Here's a list of the most commonly used natives:
* `String()`
* `Number()`
* `Boolean()`
* `Array()`
* `Object()`
* `Function()`
* `RegExp()`
* `Date()`
* `Error()`
* `Symbol()` -- added in ES6!
As you can see, these natives are actually built-in functions.
If you're coming to JS from a language like Java, `String()` will look like the `String` constructor you're used to for creating string values. So, you'll quickly observe that you can do things like:
var s = new String( "Hello World!" );
console.log( s.toString() ); // "Hello World!"
It is true that each of these natives can be used as a native constructor. But what's being constructed may be different than you think.
var a = new String( "abc" );
typeof a; // "object" ... not "String"
a instanceof String; // true
Object.prototype.toString.call( a ); // "[object String]"
The result of the constructor form of value creation (`new String("abc")`) is an object wrapper around the primitive (`"abc"`) value.
This object wrapper can further be observed with:
console.log( a );
The output of that statement varies depending on your browser, as developer consoles are free to choose however they feel it's appropriate to serialize the object for developer inspection.
For example, at the time of writing, Chrome prints this: `String {0: "a", 1: "b", 2: "c", length: 3, [[PrimitiveValue]]: "abc"}`. But Chrome used to just print this: `String {0: "a", 1: "b", 2: "c"}`. Firefox currently prints `"abc"`, but in italics, and it's clickable to open the object inspector. Of course, these results are subject to change, and your experience may vary.
These object wrappers serve a very important purpose. Primitive values don't have properties or methods, so to access `.length` or `.toString()` you need an object wrapper around the value. Thankfully, JS will automatically box (aka wrap) the primitive value to fulfill such accesses.
var a = "abc";
a.length; // 3
a.toUpperCase(); // "ABC"
So, if you're going to be accessing these properties/methods on your string values regularly, like an `i < a.length` condition in a `for` loop for instance, it might seem to make sense to just have the object form of the value from the start, so the JS engine doesn't need to implicitly create it for you.
But it turns out that's a bad idea. Browsers long ago performance-optimized the common cases like `.length`, which means your program will actually go slower if you try to "pre-optimize" by directly using the object form (which isn't on the optimized path).
In general, there's basically no reason to use the object form directly. It's better to just let the boxing happen implicitly where necessary. In other words, never do things like `new String("abc")`, `new Number(42)`, etc. -- always prefer using the literal primitive values `"abc"` and `42`.
There are some gotchas with the object wrappers.
var a = new Boolean( false );
if (!a) {
console.log( "Oops" ); // never runs
}
The problem is that you've created an object wrapper around the `false` value, but objects themselves are "truthy" (see Chapter 2), so using the object behaves oppositely to using the underlying `false` value itself, which is quite contrary to normal expectation.
For `array`, `object`, `function`, and regular-expression values, it's almost universally preferred that you use the literal form for creating the values, but the literal form creates the same sort of object as the constructor form does (that is, there is no nonwrapped value).
Just as we've seen above with the other natives, these constructor forms should generally be avoided, unless you really know you need them, mostly because they introduce a lot of exceptions and gotchas that you don't really want to deal with.
var a = new Array( 1, 2, 3 );
a; // [1, 2, 3]
var b = [1, 2, 3];
b; // [1, 2, 3]
Note: The `Array(..)` constructor does not require the `new` keyword in front of it. If you omit it, it will behave as if you had used it anyway. So `Array(1,2,3)` produces the same outcome as `new Array(1,2,3)`.
The `Array` constructor has a special form where, if only one argument is passed and it's a `number`, instead of providing that value as contents of the array, it's taken as a length to "pre-size the array" (well, sorta).
This is a terrible idea. Firstly, you can trip over that form accidentally, as it's easy to forget.
But more importantly, there's no such thing as actually pre-sizing the array. Instead, what you're creating is an otherwise empty array, but setting the `length` property of the array to the numeric value specified.
An array that has no explicit values in its slots, but has a `length` property that implies the slots exist, is a weird exotic type of data structure in JS with some very strange and confusing behavior. The capability to create such a value comes purely from old, deprecated, historical functionality ("array-like objects" like the `arguments` object).
It doesn't help matters that this is yet another example where browser developer consoles vary on how they represent such an object, which breeds more confusion.
For example:
var a = new Array( 3 );
a.length; // 3
a;
The serialization of `a` in Chrome is (at the time of writing): `[ undefined x 3 ]`. This is really unfortunate. It implies that there are three `undefined` values in the slots of this array, when in fact the slots do not exist (so-called "empty slots" -- also a bad name!).
To visualize the difference, try this:
var a = new Array( 3 );
var b = [ undefined, undefined, undefined ];
var c = [];
c.length = 3;
a;
b;
c;
Note: As you can see with `c` in this example, empty slots in an array can happen after creation of the array. By changing the `length` of an array to go beyond its number of actually defined slot values, you implicitly introduce empty slots. In fact, you could even call `delete b[1]` in the above snippet, and it would introduce an empty slot into the middle of `b`.
For `b` (in Chrome, currently), you'll find `[ undefined, undefined, undefined ]` as the serialization, as opposed to `[ undefined x 3 ]` for `a` and `c`. Confused? Yeah, so is everyone else.
Worse than that, at the time of writing, Firefox reports `[ , , , ]` for `a` and `c`. Did you catch why that's so confusing? Look closely. Three commas implies four slots, not three slots like we'd expect.
What!? Firefox puts an extra `,` on the end of its serialization here because, as of ES5, trailing commas in lists (array values, property lists, etc.) are allowed (and thus dropped and ignored). So if you were to type a `[ , , , ]` value into your program or the console, you'd actually get the underlying value that's like `[ , , ]` (that is, an array with three empty slots). This choice, while confusing when reading the developer console, is defended as instead making copy-and-paste behavior accurate.
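You can verify that trailing-comma behavior directly:

```js
[ 1, 2, 3, ].length;	// 3 -- the trailing comma is dropped
[ , , , ].length;		// 3 -- three empty slots, not four
```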
If you're shaking your head or rolling your eyes about now, you're not alone! Shrugs.
Unfortunately, it gets worse. More than just confusing console output, `a` and `b` from the above code snippet actually behave the same in some cases but differently in others:
a.join( "-" ); // "--"
b.join( "-" ); // "--"
a.map(function(v,i){ return i; }); // [ undefined x 3 ]
b.map(function(v,i){ return i; }); // [ 0, 1, 2 ]
Ugh.
The `a.map(..)` call fails because the slots don't actually exist, so `map(..)` has nothing to iterate over. `join(..)` works differently. Basically, we can think of it implemented sort of like this:
function fakeJoin(arr,connector) {
var str = "";
for (var i = 0; i < arr.length; i++) {
if (i > 0) {
str += connector;
}
if (arr[i] !== undefined) {
str += arr[i];
}
}
return str;
}
var a = new Array( 3 );
fakeJoin( a, "-" ); // "--"
As you can see, `join(..)` works by just assuming the slots exist and looping up to the `length` value. Whatever `map(..)` does internally, it (apparently) doesn't make such an assumption, so the result from the strange "empty slots" array is unexpected and likely to cause failure.
So, if you wanted to actually create an array of actual `undefined` values (not just "empty slots"), how could you do it (besides manually)?
var a = Array.apply( null, { length: 3 } );
a; // [ undefined, undefined, undefined ]
Confused? Yeah. Here's roughly how it works.
`apply(..)` is a utility available to all functions, which calls the function it's used with but in a special way.
The first argument is a `this` object binding (covered in the "this & Object Prototypes" title), which we don't care about here, so we set it to `null`. The second argument is supposed to be an array (or something like an array -- aka an "array-like object"). The contents of this "array" are "spread" out as arguments to the function in question.
So, `Array.apply(..)` is calling the `Array(..)` function and spreading out the values (of the `{ length: 3 }` object value) as its arguments.
Inside of `apply(..)`, we can envision there's another `for` loop (kinda like `join(..)` from above) that goes from `0` up to `length` (`3` in our case).
For each index, it retrieves that key from the object. So if the array-object parameter was named `arr` internally inside of the `apply(..)` function, the property accesses would effectively be `arr[0]`, `arr[1]`, and `arr[2]`. Of course, none of those properties exist on the `{ length: 3 }` object value, so all three of those property accesses would return the value `undefined`.
In other words, it ends up calling `Array(..)` basically like this: `Array(undefined,undefined,undefined)`, which is how we end up with an array filled with `undefined` values, and not just those (crazy) empty slots.
While `Array.apply( null, { length: 3 } )` is a strange and verbose way to create an array filled with `undefined` values, it's vastly better and more reliable than what you get with the footgun'ish `Array(3)` empty slots.
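Note: If you can assume ES6, `Array.from(..)` treats the same `{ length: 3 }` array-like value the same way, and is a less contorted path to the same result:

```js
// ES6 only
Array.from( { length: 3 } );	// [ undefined, undefined, undefined ]
```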
Bottom line: never ever, under any circumstances, should you intentionally create and use these exotic empty-slot arrays. Just don't do it. They're nuts.
The `Object(..)`, `Function(..)`, and `RegExp(..)` constructors are also generally optional (and thus should usually be avoided unless specifically called for):
var c = new Object();
c.foo = "bar";
c; // { foo: "bar" }
var d = { foo: "bar" };
d; // { foo: "bar" }
var e = new Function( "a", "return a * 2;" );
function f(a) { return a * 2; }
var g = new RegExp( "^a*b+", "g" );
var h = /^a*b+/g;
There's practically no reason to ever use the `new Object()` constructor form, especially since it forces you to add properties one by one instead of many at once with the object literal form.
The `Function` constructor is helpful only in the rarest of cases, where you need to dynamically define a function's parameters and/or its function body. Do not just treat `Function(..)` as an alternate form of `eval(..)`. You will almost never need to dynamically define a function in this way.
Regular expressions defined in the literal form (`/^a*b+/g`) are strongly preferred, not just for ease of syntax but for performance reasons -- the JS engine precompiles and caches them before code execution. Unlike the other constructor forms we've seen so far, `RegExp(..)` has some reasonable utility: to dynamically define the pattern for a regular expression:
var name = "Kyle";
var namePattern = new RegExp( "\\b(?:" + name + ")+\\b", "ig" );
var matches = someText.match( namePattern );
This kind of scenario legitimately occurs in JS programs from time to time, so you'd need to use the `new RegExp("pattern","flags")` form.
The `Date(..)` and `Error(..)` native constructors are much more commonly useful than the other natives, because there is no literal form for either.
To create a date object value, you must use `new Date()`. The `Date(..)` constructor accepts optional arguments to specify the date/time to use, but if omitted, the current date/time is assumed.
By far the most common reason you construct a date object is to get the current timestamp value (an integer number of milliseconds since Jan 1, 1970). You can do this by calling `getTime()` on a date object instance.
An even easier way, though, is to just call the static helper function defined as of ES5: `Date.now()`. And to polyfill that for pre-ES5 is pretty easy:
if (!Date.now) {
Date.now = function(){
return (new Date()).getTime();
};
}
Note: If you call `Date()` without `new`, you'll get back a string representation of the date/time at that moment. The exact form of this representation is not specified in the language spec, though browsers tend to agree on something close to: `"Fri Jul 18 2014 00:31:02 GMT-0500 (CDT)"`.
The `Error(..)` constructor (much like `Array()` above) behaves the same with the `new` keyword present or omitted.
The main reason you'd want to create an error object is that it captures the current execution stack context into the object (in most JS engines, revealed as a read-only `.stack` property once constructed). This stack context includes the function call-stack and the line number where the error object was created, which makes debugging that error much easier.
You would typically use such an error object with the `throw` operator:
function foo(x) {
if (!x) {
throw new Error( "x wasn't provided" );
}
// ..
}
Error object instances generally have at least a `message` property, and sometimes other properties (which you should treat as read-only), like `type`. However, other than inspecting the above-mentioned `stack` property, it's usually best to just call `toString()` on the error object (either explicitly, or implicitly through coercion -- see Chapter 2) to get a friendly-formatted error message.
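For example, a quick sketch of inspecting a caught error object:

```js
try {
	throw new Error( "x wasn't provided" );
}
catch (err) {
	err.message;		// "x wasn't provided"
	err.toString();		// "Error: x wasn't provided"

	// `stack` is not standardized, but most engines expose it;
	// its exact format is engine-dependent
	typeof err.stack;	// "string" (in most engines)
}
```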
Note: Technically, in addition to the general `Error(..)` native, there are several other specific-error-type natives: `EvalError(..)`, `RangeError(..)`, `ReferenceError(..)`, `SyntaxError(..)`, `TypeError(..)`, and `URIError(..)`. It's very rare to manually use these specific error natives, however. They are automatically used if your program actually suffers from a real exception (such as referencing an undeclared variable and getting a `ReferenceError` error).
New as of ES6, an additional primitive value type has been added, called "Symbols". Symbols are special "unique" (not guaranteed!) values that can be used as properties on objects with little fear of any collision. They're primarily designed for special built-in behaviors of ES6 constructs, but you can also define your own symbols.
Symbols can be used as property names, but you cannot see or access the actual value of a symbol from your program, nor from the developer console. You cannot implicitly coerce a symbol to a string (doing so results in a `TypeError` being thrown), though an explicit `toString()` call works, and if you output it to the developer console, what's shown is only a fake pseudo-serialization, like `Symbol(Symbol.create)`.
There are several predefined symbols in ES6, accessed as static properties of the `Symbol` function object, like `Symbol.create`, `Symbol.iterator`, etc. To use them, do something like:
obj[Symbol.iterator] = function(){ /*..*/ };
To define your own custom symbols, use the `Symbol(..)` native. The `Symbol(..)` native "constructor" is unique in that you're not allowed to use `new` with it, as doing so will throw an error.
var mysym = Symbol( "my own symbol" );
mysym; // Symbol(my own symbol)
mysym.toString(); // Symbol(my own symbol)
typeof mysym; // "symbol"
var a = { };
a[mysym] = "foobar";
Object.getOwnPropertySymbols( a ); // [ Symbol(my own symbol) ]
While symbols are not private (`Object.getOwnPropertySymbols(..)` reflects on the object and reveals the symbols), using them for private or special properties is their primary use case. For most developers, they will probably take the place of property names with `__` prefixes, which are almost always, by convention, signals to say, "hey, this is a private property, leave it alone!"
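For illustration, here's a sketch of that pattern (the property and object names here are made up for the example):

```js
// hypothetical example
var SECRET_COUNT = Symbol( "secret count" );

var counter = {
	increment: function() {
		this[SECRET_COUNT] = (this[SECRET_COUNT] || 0) + 1;
	}
};

counter.increment();
counter.increment();

counter[SECRET_COUNT];						// 2
Object.keys( counter );						// [ "increment" ] -- no symbol keys listed
Object.getOwnPropertySymbols( counter );	// [ Symbol(secret count) ]
```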