Apparently, space consumption is the same whether a VARCHAR2(4) or a VARCHAR2(4000) is used: if one byte is required, one byte is allocated, so performance should be the same as well. But a look at the performance statistics reveals some surprises.
When fetching information from Oracle, buffers and fetch structures are prepared in advance to accommodate the maximum declared column length; applications such as Toad may run out of memory trying to allocate the resources needed to retrieve data from an unnecessarily oversized VARCHAR2 column.
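To see the scale of the problem, consider the client-side fetch buffers. A minimal sketch of the arithmetic, assuming the client sizes its buffers to the declared maximum and fetches with the SQL*Plus default arraysize of 15 (these assumptions are illustrative, not measured from the demo below):

```sql
-- 20 VARCHAR2(4000) columns x 4000 bytes x 15 rows per fetch:
-- the buffer must cover the worst case even when every value is 4 bytes.
SELECT 20 * 4000 * 15 AS worst_case_fetch_bytes,
       20 * 4    * 15 AS actual_data_bytes
  FROM dual;
-- roughly 1.2 MB reserved per fetch versus about 1.2 KB of actual data
```

A client fetching hundreds of rows at a time, or holding many such cursors open, multiplies this waste accordingly.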
Let's analyze this case:
Table 1: Varchar2LenghtDemo
SQL> desc Varchar2LenghtDemo
Name Null? Type
----------------------------------------- -------- ----------------------
ID NUMBER(10)
VCCOL VARCHAR2(4000)
VCCOL2 VARCHAR2(4000)
VCCOL3 VARCHAR2(4000)
VCCOL4 VARCHAR2(4000)
VCCOL5 VARCHAR2(4000)
VCCOL6 VARCHAR2(4000)
VCCOL7 VARCHAR2(4000)
VCCOL8 VARCHAR2(4000)
VCCOL9 VARCHAR2(4000)
VCCOL10 VARCHAR2(4000)
VCCOL11 VARCHAR2(4000)
VCCOL12 VARCHAR2(4000)
VCCOL13 VARCHAR2(4000)
VCCOL14 VARCHAR2(4000)
VCCOL15 VARCHAR2(4000)
VCCOL16 VARCHAR2(4000)
VCCOL17 VARCHAR2(4000)
VCCOL18 VARCHAR2(4000)
VCCOL19 VARCHAR2(4000)
VCCOL20 VARCHAR2(4000)
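For reference, a table with this layout could be created and populated roughly as follows. The DDL and the doubling INSERT are assumptions reconstructed from the description; only the names, types, and the 8,388,608 (2^23) row count come from the output shown here:

```sql
CREATE TABLE Varchar2LenghtDemo (
  id      NUMBER(10),
  vccol   VARCHAR2(4000),
  vccol2  VARCHAR2(4000),
  -- ... vccol3 through vccol19 follow the same pattern ...
  vccol20 VARCHAR2(4000)
);

-- Seed one row with short values, then double the row count
-- with INSERT ... SELECT; 23 passes reach 2^23 = 8,388,608 rows.
INSERT INTO Varchar2LenghtDemo (id, vccol, vccol20)
VALUES (1, 'abcd', 'abcd');

INSERT INTO Varchar2LenghtDemo SELECT * FROM Varchar2LenghtDemo;
-- (repeat the statement above until the desired row count is reached)
COMMIT;
```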
23:39:55 SQL> select count(*) from Varchar2LenghtDemo;
COUNT(*)
----------
8388608
Elapsed: 00:00:10.18
Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
55719 consistent gets
55669 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Now compare these statistics against an equivalent table whose columns are sized to fit the actual data:
SQL> desc Varchar2LenghtDemoShort
Name Null? Type
----------------------------------------- -------- --------------------
ID NUMBER(10)
VCCOL VARCHAR2(4)
VCCOL2 VARCHAR2(4)
VCCOL3 VARCHAR2(4)
VCCOL4 VARCHAR2(4)
VCCOL5 VARCHAR2(4)
VCCOL6 VARCHAR2(4)
VCCOL7 VARCHAR2(4)
VCCOL8 VARCHAR2(4)
VCCOL9 VARCHAR2(4)
VCCOL10 VARCHAR2(4)
VCCOL11 VARCHAR2(4)
VCCOL12 VARCHAR2(4)
VCCOL13 VARCHAR2(4)
VCCOL14 VARCHAR2(4)
VCCOL15 VARCHAR2(4)
VCCOL16 VARCHAR2(4)
VCCOL17 VARCHAR2(4)
VCCOL18 VARCHAR2(4)
VCCOL19 VARCHAR2(4)
VCCOL20 VARCHAR2(4)
23:43:02 SQL> r
1* select count(*) from Varchar2LenghtDemoShort
COUNT(*)
----------
8388608
Elapsed: 00:00:05.50
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
55756 consistent gets
22077 physical reads
0 redo size
414 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Almost all statistics are the same, except for these two:
Operation: SELECT COUNT(*) FROM table

Table            Rows       Time Elapsed   Phy Rds
VARCHAR2(4)      8,388,608  00:00:05.50    22,077
VARCHAR2(4000)   8,388,608  00:00:10.18    55,669
Since the number of physical reads is meaningfully different, a question arises:
How many rows per block? Is the row density the same?
Table            Avg Rows per Block
VARCHAR2(4)      150.99
VARCHAR2(4000)   150.99
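The row-density figures above can be derived from the data dictionary, assuming fresh statistics have been gathered on both tables (USER_TABLES is the standard dictionary view; table names are the ones from this demo):

```sql
-- Gather statistics so NUM_ROWS and BLOCKS are current:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'VARCHAR2LENGHTDEMO');
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'VARCHAR2LENGHTDEMOSHORT');

SELECT table_name,
       num_rows,
       blocks,
       ROUND(num_rows / blocks, 2) AS avg_rows_per_block
  FROM user_tables
 WHERE table_name IN ('VARCHAR2LENGHTDEMO', 'VARCHAR2LENGHTDEMOSHORT');
```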
Quite surprisingly, the row density is the same, yet the number of physical blocks read is more than doubled.
Conclusion
Even though VARCHAR2 columns hold a variable number of characters and consume only the bytes actually required, it is far better to size columns to the real requirements; otherwise a meaningful performance problem is created for no reason.
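As a practical follow-up, a query along these lines can flag candidates for resizing in your own schema: VARCHAR2 columns declared at the maximum width (USER_TAB_COLUMNS is a standard dictionary view; the 4000-byte threshold is an arbitrary choice for this example and can be lowered to catch other oversized declarations):

```sql
SELECT table_name,
       column_name,
       data_length
  FROM user_tab_columns
 WHERE data_type = 'VARCHAR2'
   AND data_length >= 4000
 ORDER BY table_name, column_name;
```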