Comments on Oracle by Madrid: Why is Varchar2 oversizing bad?

Anonymous (2009-08-28):

I don't agree with the example: your first example's data is retrieved from cache, while the second's is not.

I simulated the same example at http://arjudba.blogspot.com/2009/08/does-oversize-of-datatype-varchar2.html

There is in fact no performance problem.

Hector R. Madrid (2008-06-10):

Jonathan, I will check it. The environment I used was a regular 10.2.0.1.0 database with LMT, an 8K block size, and default storage parameters.

The tables were populated by means of insert-as-select statements taking the data from the same table.

Another fact I should point out, an undo issue: while running the insert procedure it was possible to insert a larger amount of data into the short-sized varchar2 table, whereas the procedure ran out of undo space for the long-sized varchar2 table.

Anonymous (2008-06-03):

I can't reproduce your results on 10.2.0.3 using 8K blocks, LMT with autoallocate, and ASSM.
What was your tablespace definition, the table's pctfree, and how did you populate the table?

Since you seem to have two tables of 55,000 blocks and have reported only 22,000 blocks read for one of the tests, it looks as if either you had 33,000 blocks buffered, or your version of Oracle did asynchronous readahead for one of your tests and did not count the reads as physical.

Can you reproduce the tests and use Tanel Poder's 'snapper' (or something similar) to capture the session's v$sesstat and v$session_event entries?

Eduardo Legatti (2008-06-01):

Hi Hector,

Nice article. By the way, in addition to Tonguç's comment about ORA-01450, creating a function-based index could partially solve the "problem":

SQL> create table t (a varchar2(4000), b varchar2(4000));

Table created.

SQL> create index t_idx_ab
  2  on t(substr(a,1,100), substr(b,1,100));

Index created.

SQL> insert into t
  2  values ('aaaaaaaaaaaaaaa','bbbbbbbbbbbbbbbb');

1 row created.

SQL> analyze table t compute statistics;

Table analyzed.

SQL> explain plan for
  2  select * from t
  3  where a='aaaaaaaaaaaaaaa' and b='bbbbbbbbbbbbbbbb';

Explained.

SQL> select * from table(dbms_xplan.display);

TABLE ACCESS BY INDEX ROWID
  INDEX RANGE SCAN (T_IDX_AB)

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("A"='aaaaaaaaaaaaaaa' AND "B"='bbbbbbbbbbbbbbbb')
2 - access(SUBSTR("A",1,100)='aaaaaaaaaaaaaaa' AND
    SUBSTR("B",1,100)='bbbbbbbbbbbbbbbb')

Cheers,
Eduardo Legatti
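The v$sesstat check requested above can be done without any extra tooling. A minimal sketch, assuming the tester runs it from the test session itself (statistic names are the standard ones from V$STATNAME; capture the values before and after each test query and diff them):

```
-- Hypothetical sketch: physical vs. logical reads for the current session.
-- Run before and after each test; the difference is what the test cost.
select sn.name, ms.value
  from v$mystat   ms
  join v$statname sn on sn.statistic# = ms.statistic#
 where sn.name in ('physical reads',
                   'physical reads cache',
                   'session logical reads');
```

If readahead is being miscounted, the gap between physical and logical reads across the two runs should make it visible.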
Hector R. Madrid (2008-05-10):

Actually, I am not suggesting using the CHAR datatype instead of VARCHAR2, since CHAR reserves the full declared number of characters even when they are not used, wasting space. My point is that even though a VARCHAR2 column can be declared at any size, it should not be oversized; otherwise performance and run-time problems may arise.

Cool (2008-05-09):

Hi! So you are strongly recommending that we use CHAR instead of the VARCHAR2 datatype, am I right?

Hector R. Madrid (2008-05-08):

Certainly. When I was performing the tests there was an additional issue I didn't publish, both for lack of database resources and because it prevented me from collecting the operational statistics I was after: when I ran another insert-as-select on the varchar2(4000) table, inserting 8 million rows made the undo segment overflow, whereas the varchar2(4) table kept accepting inserts up to 64 million rows without any problem in the undo tablespace.
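The undo comparison described above can be reproduced with a pair of doubling insert-as-select passes. A hypothetical reconstruction (table and column names are assumed, not the original ones):

```
-- Two tables differing only in the declared VARCHAR2 length;
-- the stored data is identical in both.
create table t_short (v varchar2(4));
create table t_long  (v varchar2(4000));

insert into t_short values ('abcd');
insert into t_long  values ('abcd');

-- Each uncommitted pass doubles the row count, so the undo
-- generated by a single pass grows quickly; repeat until one
-- of the two runs out of undo space.
insert into t_short select v from t_short;
insert into t_long  select v from t_long;
commit;
```

Since both tables hold the same four-character values, any difference in undo consumption between the two runs is attributable to the declared column size rather than the data.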
Tonguç (2008-05-08):

Hi Madrid,

Once upon a time one of my colleagues put this question to me: since the VARCHAR2 datatype has the flexibility to store variable-length values, "why not store a 'human name' :) in a varchar2(4000) column? What are the alternative costs, given that varchar2(40) is already enough for this kind of need?"

Besides row chaining and migration, PL/SQL's %ROWTYPE and %TYPE usage is another threat in terms of memory consumption:

http://tonguc.yilmaz.googlepages.com/Code_listing-73-Memory_cost_analysis.txt

There are also several hard limitations:

create table t ( a varchar2(4000), b varchar2(4000) );
create index t_idx_ab on t(a, b);

create index t_idx_ab on t(a, b)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded

The alternative CHAR datatype stores fixed-length character strings. Since Oracle compares CHAR values using blank-padded comparison semantics, whereas it compares VARCHAR2 values using nonpadded comparison semantics, this also confuses developers a lot and, I believe, motivates them to use VARCHAR2 over CHAR.

Best regards.
Tonguç
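The %TYPE point above is easy to overlook: a PL/SQL variable anchored to an oversized column inherits the column's declared maximum, not the size of the data actually stored. A minimal sketch, assuming a table `t` like the one in the comment above:

```
-- Hypothetical sketch: %TYPE inherits the declared 4000-byte maximum.
create table t ( a varchar2(4000) );

declare
  v t.a%type;   -- declared as varchar2(4000), regardless of stored data
begin
  v := 'short value';
  dbms_output.put_line('declared max is inherited; value: ' || v);
end;
/
```

How much memory the runtime actually reserves for such a variable depends on the Oracle version's internal allocation strategy, but the declared maximum, and therefore the worst case, follows the oversized column definition.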