ALTER TABLE tbl01 ADD col01 BIT NOT NULL DEFAULT 0
The schema change is taking so long because you are assigning a default value to the column and enforcing it with NOT NULL in the same operation: SQL Server has to populate the new column for millions of existing rows, which is an incredibly expensive operation.
An alternative that makes it faster is to add the column as nullable with no default value, then perform the update in batches to assign the default value. Once the updates are done, you can apply another schema change to make the column non-nullable and add the default constraint.
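A batched update along these lines keeps each transaction, and therefore each log record set, small (the batch size of 10,000 is an arbitrary starting point; tune it for your workload):

```sql
-- Update rows that are still NULL a batch at a time, so each
-- transaction commits quickly and the log can be reused between batches.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) tbl01
    SET col01 = 0
    WHERE col01 IS NULL;

    -- Stop once no more rows were touched.
    IF @@ROWCOUNT = 0 BREAK;
END
```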
Adding a default of '' or 0 instead of leaving the column nullable is a performance-intensive operation.
ALTER TABLE is most likely slow because of the update. Remember, since that is a single transaction, a huge log will be generated, and your log file is probably growing right now and being constantly zeroed out as it expands. However, it might also be slow because of ordinary contention: the statement may be unable to acquire the Sch-M (schema modification) lock on the table.
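You can check whether the ALTER is actually blocked rather than working, using the standard DMVs (this query only inspects state, so it is safe to run while the ALTER is in flight):

```sql
-- Show requests that are blocked, or waiting on a schema lock.
-- A wait_type of LCK_M_SCH_M means the ALTER cannot get its
-- schema modification lock; a PAGEIOLATCH/WRITELOG wait means it
-- is genuinely doing work.
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0
   OR wait_type LIKE 'LCK_M_SCH%';
```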
The worst thing you can do right now is panic and hard-kill the statement, since the rollback can take as long as the work already done. Let it run to completion.
- The longer the table row, the longer it will take.
- The more indexes you have on that table, the longer it will take.
- If you add a default value (which you did), it will take longer.
- If you have heavy usage on the server it will take longer.
- If you don’t lock that database or put it in single user mode, it will take longer.
You could also try dropping all nonclustered indexes on the table before making the change, and recreating them afterwards. Adding a column may well involve large-scale page splits or other low-level rearrangements, and you can do without the overhead of updating nonclustered indexes while that is going on.
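A sketch of that approach (the index name IX_tbl01_example and its key column are placeholders; script out your real index definitions first so you can recreate them exactly as they were):

```sql
-- Drop the nonclustered index before the schema change ...
DROP INDEX IX_tbl01_example ON tbl01;

-- ... run the ALTER TABLE / batched UPDATE here ...

-- ... then recreate the index afterwards from your scripted definition.
CREATE NONCLUSTERED INDEX IX_tbl01_example ON tbl01 (some_column);
```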
You could try performing each step of the operation in a separate batch, e.g.
ALTER TABLE tbl01 ADD col01 BIT NULL
GO
UPDATE tbl01 SET col01 = 0
GO
ALTER TABLE tbl01 ALTER COLUMN col01 BIT NOT NULL
GO
ALTER TABLE tbl01 ADD DEFAULT 0 FOR col01
GO