threads (listlengths: 1 to 275)
[ { "msg_contents": "\n\n", "msg_date": "Tue, 10 Sep 2002 08:40:44 +0200", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "subscribe\n", "msg_date": "Tue, 10 Sep 2002 13:31:10 +0100", "msg_from": "\"Gavin Love\" <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "\n", "msg_date": "Tue, 10 Sep 2002 11:20:23 -0400", "msg_from": "Bryan White <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Tue, 10 Sep 2002 10:07:06 -0600", "msg_from": "Jason k Larson <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Tue, 10 Sep 2002 10:00:22 -0700", "msg_from": "Ericson Smith <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "\n-- \n\nKeith Gray\nTechnical Services Manager\nHeart Consulting Services\n\n", "msg_date": "Wed, 11 Sep 2002 12:05:24 +1000", "msg_from": "Keith Gray <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "subscribe\n", "msg_date": "11 Sep 2002 11:00:03 -0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "subscribe\n\n\n------------------------------------------------------------------------------\n > Khusus Pelanggan Telepon DIVRE 2, Tekan 166 untuk mendengarkan pesan Anda\n ------------------------------------------------------------------------------\n", "msg_date": "Wed, 11 Sep 2002 18:05:35 +0700", "msg_from": "\"kopra\" <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "Hi everyone,\n\nThere are PostgreSQL servers around that are handling 2,000 simultaneous\nclient connections (in real life) without problems, but no-one obvious\nseems to have yet taken the time to do fine grained testing of the\nservers which can take this kind of load, to accurately model their\nperformance characteristics.\n\nDoes anyone here happen to have fine grained benchmark/performance\nfigures hanging around which get into this range of performance? \nPreferably with pretty precise details of how the system was configured,\netc.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 14 Sep 2002 10:15:59 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone have any find grained benchmark data?" } ]
[ { "msg_contents": "subscribe\n\n\n------------------------------------------------------------------------------\n > Khusus Pelanggan Telepon DIVRE 2, Tekan 166 untuk mendengarkan pesan Anda\n ------------------------------------------------------------------------------\n", "msg_date": "Sat, 14 Sep 2002 19:28:03 +0700", "msg_from": "\"kopra\" <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "Hello all,\n\nSome time back I posted a query to build a site with 150GB of database. In last \ncouple of weeks, lots of things were tested at my place and there are some \nresults and again some concerns. \n\nThis is a long post. Please be patient and read thr. If we win this, I guess we \nhave a good marketing/advocacy case here..;-)\n\nFirst the problems (For those who do not read beyond first page)\n\n1) Database load time from flat file using copy is very high\n2) Creating index takes huge amount of time.\n3) Any suggsestions for runtime as data load and query will be going in \nparallel.\n\nNow the details. Note that this is a test run only..\n\nPlatform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\nRedHat7.2/PostgreSQL7.1.3\n\nDatabase in flat file: \n125,000,000 records of around 100 bytes each. \nFlat file size 12GB\n\nLoad time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\nCreate unique composite index on 2 char and a timestamp field: 25226 sec.\nDatabase size on disk: 26GB\nSelect query: 1.5 sec. for approx. 150 rows.\n\nImportant postgresql.conf settings\n\nsort_mem = 12000\nshared_buffers = 24000\nfsync=true (Sad but true. Left untouched.. Will that make a difference on \nSCSI?)\nwal_buffers = 65536 \nwal_files = 64 \n\nNow the requirements\n\nInitial flat data load: 250GB of data. This has gone up since last query. It \nwas 150GB earlier..\nOngoing inserts: 5000/sec. \nNumber of queries: 4800 queries/hour\nQuery response time: 10 sec.\n\n\nNow questions.\n\n1) Instead of copying from a single 12GB data file, will a parallel copy from \nsay 5 files will speed up the things? \n\nCouple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \nsetup..\n\n2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \naddition to improve create index performance?\n\n3) 5K concurrent inserts with an index on, will this need a additional CPU \npower? Like deploying it on dual RISC CPUs etc? \n\n4) Query performance is not a problem. Though 4.8K queries per sec. expected \nresponse time from each query is 10 sec. But my guess is some serius CPU power \nwill be chewed there too..\n\n5)Will upgrading to 7.2.2/7.3 beta help?\n\nAll in all, in the test, we didn't see the performance where hardware is \nsaturated to it's limits. So effectively we are not able to get postgresql \nmaking use of it. Just pushing WAL and shared buffers does not seem to be the \nsolution.\n\nIf you guys have any suggestions. let me know. I need them all..\n\nMysql is almost out because it's creating index for last 17 hours. I don't \nthink it will keep up with 5K inserts per sec. with index. SAP DB is under \nevaluation too. But postgresql is most favourite as of now because it works. So \nI need to come up with solutions to problems that will occur in near future..\n;-)\n\nTIA..\n\nBye\n Shridhar\n\n--\nLaw of Procrastination:\tProcrastination avoids boredom; one never has\tthe \nfeeling that there is nothing important to do.\n\n", "msg_date": "Thu, 26 Sep 2002 14:05:44 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:\n> Some time back I posted a query to build a site with 150GB of database. In last \n> couple of weeks, lots of things were tested at my place and there are some \n> results and again some concerns. 
\n\n> 2) Creating index takes huge amount of time.\n> Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n\n> 2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \n> addition to improve create index performance?\n\nJust a thought. If I sort the table before making an index, would it be faster \nthan creating index on raw table? And/or if at all, how do I sort the table \nwithout duplicating it?\n\nJust a wild thought..\n\nBye\n Shridhar\n\n--\nlinux: the choice of a GNU generation([email protected] put this on Tshirts in \n'93)\n\n", "msg_date": "Thu, 26 Sep 2002 14:24:02 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I'll preface this by saying that while I have a large database, it doesn't\nrequire quite the performace you're talking about here.\n\nOn Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:\n> 1) Database load time from flat file using copy is very high\n> 2) Creating index takes huge amount of time.\n> 3) Any suggsestions for runtime as data load and query will be going in \n> parallel.\n\nYou're loading all the data in one copy. I find that INSERTs are mostly\nlimited by indexes. While index lookups are cheap, they are not free and\neach index needs to be updated for each row.\n\nI fond using partial indexes to only index the rows you actually use can\nhelp with the loading. It's a bit obscure though.\n\nAs for parallel loading, you'll be limited mostly by your I/O bandwidth.\nHave you measured it to take sure it's up to speed?\n\n> Now the details. Note that this is a test run only..\n> \n> Platform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\n> RedHat7.2/PostgreSQL7.1.3\n> \n> Database in flat file: \n> 125,000,000 records of around 100 bytes each. \n> Flat file size 12GB\n> \n> Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n\nSo you're loading at a rate of 860KB per sec. That's not too fast. How many\nindexes are active at that time? Triggers and foreign keys also take their\ntoll.\n\n> Important postgresql.conf settings\n> \n> sort_mem = 12000\n> shared_buffers = 24000\n> fsync=true (Sad but true. Left untouched.. Will that make a difference on \n> SCSI?)\n> wal_buffers = 65536 \n> wal_files = 64 \n\nfsync IIRC only affects the WAL buffers now but it may be quite expensive,\nespecially considering it's running on every transaction commit. Oh, your\nWAL files are on a seperate disk from the data?\n\n> Initial flat data load: 250GB of data. This has gone up since last query. It \n> was 150GB earlier..\n> Ongoing inserts: 5000/sec. \n> Number of queries: 4800 queries/hour\n> Query response time: 10 sec.\n\nThat looks quite acheivable.\n\n> 1) Instead of copying from a single 12GB data file, will a parallel copy from \n> say 5 files will speed up the things? \n\nLimited by I/O bandwidth. On linux vmstat can tell you how many blocks are\nbeing loaded and stored per second. Try it. As long as sync() doesn't get\ndone too often, it should be help.\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \n> setup..\n\nNo, it's not. 
You should be able to do better.\n\n> 2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \n> addition to improve create index performance?\n\nShould be fine. Admittedly your indexes are taking rather long to build.\n\n> 3) 5K concurrent inserts with an index on, will this need a additional CPU \n> power? Like deploying it on dual RISC CPUs etc? \n\nIt shouldn't. Do you have an idea of what your CPU usage is? ps aux should\ngive you a decent idea.\n\n> 4) Query performance is not a problem. Though 4.8K queries per sec. expected \n> response time from each query is 10 sec. But my guess is some serius CPU power \n> will be chewed there too..\n\nShould be fine.\n\n> 5)Will upgrading to 7.2.2/7.3 beta help?\n\nPossibly, though it may be wirth it just for the features/bugfixes.\n\n> All in all, in the test, we didn't see the performance where hardware is \n> saturated to it's limits. So effectively we are not able to get postgresql \n> making use of it. Just pushing WAL and shared buffers does not seem to be the \n> solution.\n> \n> If you guys have any suggestions. let me know. I need them all..\n\nFind the bottleneck: CPU, I/O or memory?\n\n> Mysql is almost out because it's creating index for last 17 hours. I don't \n> think it will keep up with 5K inserts per sec. with index. SAP DB is under \n> evaluation too. But postgresql is most favourite as of now because it works. So \n> I need to come up with solutions to problems that will occur in near future..\n> ;-)\n\n17 hours! Ouch. Either way, you should be able to do much better. Hope this\nhelps,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Thu, 26 Sep 2002 19:05:19 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 10:51, [email protected] wrote:\n\n> Hi,\n> it seems you have to cluster it, I don't think you have another choise.\n\nHmm.. That didn't occur to me...I guess some real time clustering like usogres \nwould do. Unless it turns out to be a performance hog..\n\nBut this is just insert and select. No updates no deletes(Unless customer makes \na 180 degree turn) So I doubt if clustering will help. At the most I can \nreplicate data across machines and spread queries on them. Replication overhead \nas a down side and low query load on each machine as upside..\n\n> I'm retrieving the configuration of our postgres servers (I'm out of office\n> now), so I can send it to you. I was quite disperate about performance, and\n> I was thinking to migrate the data on an oracle database. Then I found this\n> configuration on the net, and I had a succesfull increase of performance.\n\nIn this case, we are upto postgresql because we/our customer wants to keep the \ncosts down..:-) Even they are asking now if it's possible to keep hardware \ncosts down as well. That's getting some funny responses here but I digress..\n\n> Maybe this can help you.\n> \n> Why you use copy to insert records? I usually use perl scripts, and they\n> work well .\n\nPerformance reasons. As I said in one of my posts earlier, putting upto 100K \nrecords in one transaction in steps of 10K did not reach performance of copy. 
\nAs Tom said rightly, it was a 4-1 ratio despite using transactions..\n\nThanks once again..\nBye\n Shridhar\n\n--\nSecretary's Revenge:\tFiling almost everything under \"the\".\n\n", "msg_date": "Thu, 26 Sep 2002 14:43:20 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Hi Shridhar,\n\nShridhar Daithankar wrote:\n<snip>\n> 3) Any suggsestions for runtime as data load and query will be going in\n> parallel.\n\nThat sounds unusual. From reading this, it *sounds* like you'll be\nrunning queries against an incomplete dataset, or maybe just running the\nqueries that affect the tables loaded thus far (during the initial\nload).\n\n<snip>\n> fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> SCSI?)\n\nDefinitely. Have directly measured a ~ 2x tps throughput increase on\nFreeBSD when leaving fsync off whilst performance measuring stuff\nrecently (PG 7.2.2). Like anything it'll depend on workload, phase of\nmoon, etc, but it's a decent indicator.\n\n<snip>\n> Now questions.\n> \n> 1) Instead of copying from a single 12GB data file, will a parallel copy from\n> say 5 files will speed up the things?\n\nNot sure yet. Haven't get done enough performance testing (on the cards\nvery soon though).\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5\n> setup..\n\nfsync = off would help during the data load, but not a good idea if\nyou're going to be running queries against it at the same time.\n\nAm still getting the hang of performance tuning stuff. Have a bunch of\nUltra160 hardware for the Intel platform, and am testing against it as\ntime permits.\n\nNot as high end as I'd like, but it's a start.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip>\n> Bye\n> Shridhar\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:17:32 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 19:17, Justin Clift wrote:\n> Shridhar Daithankar wrote:\n> <snip>\n> > 3) Any suggsestions for runtime as data load and query will be going in\n> > parallel.\n> \n> That sounds unusual. From reading this, it *sounds* like you'll be\n> running queries against an incomplete dataset, or maybe just running the\n> queries that affect the tables loaded thus far (during the initial\n> load).\n\nThat's correct. Load the data so far and keep inserting data as and when it \ngenerates.\n\nThey don't mind running against data so far. It's not very accurate stuff \nIMO...\n\n> > fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> > SCSI?)\n> \n> Definitely. Have directly measured a ~ 2x tps throughput increase on\n> FreeBSD when leaving fsync off whilst performance measuring stuff\n> recently (PG 7.2.2). Like anything it'll depend on workload, phase of\n> moon, etc, but it's a decent indicator.\n\nI didn't know even that matters with SCSI..Will check out..\n\n> fsync = off would help during the data load, but not a good idea if\n> you're going to be running queries against it at the same time.\n\nThat's OK for the reasons mentioned above. 
It wouldn't be out of place to \nexpect a UPS to such an installation...\n\nBye\n Shridhar\n\n--\nHoare's Law of Large Problems:\tInside every large problem is a small problem \nstruggling to get out.\n\n", "msg_date": "Thu, 26 Sep 2002 15:05:40 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n\n> On Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:\n> > 1) Database load time from flat file using copy is very high\n> > 2) Creating index takes huge amount of time.\n> > 3) Any suggsestions for runtime as data load and query will be going in \n> > parallel.\n> \n> You're loading all the data in one copy. I find that INSERTs are mostly\n> limited by indexes. While index lookups are cheap, they are not free and\n> each index needs to be updated for each row.\n> \n> I fond using partial indexes to only index the rows you actually use can\n> help with the loading. It's a bit obscure though.\n> \n> As for parallel loading, you'll be limited mostly by your I/O bandwidth.\n> Have you measured it to take sure it's up to speed?\n\nWell. It's like this, as of now.. CreateDB->create table->create index->Select.\n\nSo loading is not slowed by index. As of your hint of vmstat, will check it \nout.\n> So you're loading at a rate of 860KB per sec. That's not too fast. How many\n> indexes are active at that time? Triggers and foreign keys also take their\n> toll.\n\nNothing except the table where data os loaded..\n\n> fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n> especially considering it's running on every transaction commit. Oh, your\n> WAL files are on a seperate disk from the data?\n\nNo. Same RAID 5 disks..\n\n> It shouldn't. Do you have an idea of what your CPU usage is? ps aux should\n> give you a decent idea.\n\nI guess we forgot to monitor system parameters. Next on my list is running \nvmstat, top and tuning bdflush.\n \n> Find the bottleneck: CPU, I/O or memory?\n\nUnderstood..\n> \n> > Mysql is almost out because it's creating index for last 17 hours. I don't \n> > think it will keep up with 5K inserts per sec. with index. SAP DB is under \n> > evaluation too. But postgresql is most favourite as of now because it works. So \n> > I need to come up with solutions to problems that will occur in near future..\n> > ;-)\n> \n> 17 hours! Ouch. Either way, you should be able to do much better. Hope this\n> helps,\n\nHeh.. no wonder this evaluation is taking more than 2 weeks.. Mysql was running \nout of disk space while creating index and crashin. An upgrade to mysql helped \nthere but no numbers as yet..\n\nThanks once again...\nBye\n Shridhar\n\n--\nBoren's Laws:\t(1) When in charge, ponder.\t(2) When in trouble, delegate.\t(3) \nWhen in doubt, mumble.\n\n", "msg_date": "Thu, 26 Sep 2002 15:16:50 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On Thursday 26 Sep 2002 9:35 am, Shridhar Daithankar wrote:\n\n[questions re: large database]\n\nBefore reading my advice please bear in mind you are operating way beyond the \nscale of anything I have ever built.\n\n> Now the details. 
Note that this is a test run only..\n>\n> Platform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\n> RedHat7.2/PostgreSQL7.1.3\n>\n> Database in flat file:\n> 125,000,000 records of around 100 bytes each.\n> Flat file size 12GB\n>\n> Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n>\n> Important postgresql.conf settings\n[snipped setting details for moment]\n\nHave you tried putting the wal files, syslog etc on separate disks/volumes? If \nyou've settled on Intel, about the only thing you can optimise further is the \ndisks.\n\nOh - and the OS - make sure you're running a (good) recent kernel for that \nsort of hardware, I seem to remember some substantial changes in the 2.4 \nseries regarding multi-processor.\n\n> Now the requirements\n>\n> Initial flat data load: 250GB of data. This has gone up since last query.\n> It was 150GB earlier..\n> Ongoing inserts: 5000/sec.\n> Number of queries: 4800 queries/hour\n> Query response time: 10 sec.\n\nIs this 5000 rows in say 500 transactions or 5000 insert transactions per \nsecond. How many concurrent clients is this? Similarly for the 4800 queries, \nhow many concurrent clients is this? Are they expected to return approx 150 \nrows as in your test?\n\n> Now questions.\n>\n> 1) Instead of copying from a single 12GB data file, will a parallel copy\n> from say 5 files will speed up the things?\n\nIf the CPU is the bottle-neck then it should, but it's difficult to say \nwithout figures.\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5\n> setup..\n\nWhat is saturating during the flat-file load? Something must be maxed in top / \niostat / vmstat.\n\n[snip]\n>\n> 5)Will upgrading to 7.2.2/7.3 beta help?\n\nIt's unlikely to hurt.\n\n> All in all, in the test, we didn't see the performance where hardware is\n> saturated to it's limits.\n\nSomething *must* be.\n\nWhat are your disaster recovery plans? I can see problems with taking backups \nif this beast is live 24/7.\n\n- Richard Huxton\n", "msg_date": "Thu, 26 Sep 2002 10:48:06 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> > > fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> > > SCSI?)\n> >\n> > Definitely. Have directly measured a ~ 2x tps throughput increase on\n> > FreeBSD when leaving fsync off whilst performance measuring stuff\n> > recently (PG 7.2.2). Like anything it'll depend on workload, phase of\n> > moon, etc, but it's a decent indicator.\n> \n> I didn't know even that matters with SCSI..Will check out..\n\nCool. When testing it had FreeBSD 4.6.2 installed on one drive along\nwith the PostgreSQL 7.2.2 binaries, it had the data on a second drive\n(mounted as /pgdata), and it had the pg_xlog directory mounted on a\nthird drive. Swap had it's own drive as well.\n\nEverything is UltraSCSI, etc. Haven't yet tested for a performance\ndifference through moving the indexes to another drive after creation\nthough. That apparently has the potential to help as well.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:49:53 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n> \n> On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n<snip>\n> > fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n> > especially considering it's running on every transaction commit. Oh, your\n> > WAL files are on a seperate disk from the data?\n> \n> No. Same RAID 5 disks..\n\nNot sure if this is a good idea. Would have to think deeply about the\ncontroller and drive optimisation/load characteristics.\n\nIf it's any help, when I was testing recently with WAL on a separate\ndrive, the WAL logs were doing more read&writes per second than the main\ndata drive. This would of course be affected by the queries you are\nrunning against the database. I was just running Tatsuo's TPC-B stuff,\nand the OSDB AS3AP tests.\n\n> I guess we forgot to monitor system parameters. Next on my list is running\n> vmstat, top and tuning bdflush.\n\nThat'll just be the start of it for serious performance tuning and\nlearning how PostgreSQL works. :)\n\n<snip>\n> Thanks once again...\n> Bye\n> Shridhar\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:56:34 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n> RedHat7.2/PostgreSQL7.1.3\n\nI'd suggest a newer release of Postgres ... 7.1.3 is pretty old ...\n\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n\nWhat do you mean by \"char\" exactly? If it's really char(N), how much\nare you paying in padding space? There are very very few cases where\nI'd not say to use varchar(N), or text, instead. Also, does it have to\nbe character data? If you could use an integer or float datatype\ninstead the index operations should be faster (though I can't say by\nhow much). Have you thought carefully about the order in which the\ncomposite index columns are listed?\n\n> sort_mem = 12000\n\nTo create an index of this size, you want to push sort_mem as high as it\ncan go without swapping. 12000 sounds fine for the global setting, but\nin the process that will create the index, try setting sort_mem to some\nhundreds of megs or even 1Gb. (But be careful: the calculation of space\nactually used by CREATE INDEX is off quite a bit in pre-7.3 releases\n:-(. You should probably expect the actual process size to grow to two\nor three times what you set sort_mem to. Don't let it get so big as to\nswap.)\n\n> wal_buffers = 65536 \n\nThe above is a complete waste of memory space, which would be better\nspent on letting the kernel expand its disk cache. 
There's no reason\nfor wal_buffers to be more than a few dozen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:33:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing " }, { "msg_contents": "Justin Clift <[email protected]> writes:\n>> On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n>>> fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n>>> especially considering it's running on every transaction commit. Oh, your\n>>> WAL files are on a seperate disk from the data?\n\n> Not sure if this is a good idea. Would have to think deeply about the\n> controller and drive optimisation/load characteristics.\n\n> If it's any help, when I was testing recently with WAL on a separate\n> drive, the WAL logs were doing more read&writes per second than the main\n> data drive.\n\n... but way fewer seeks. For anything involving lots of updating\ntransactions (and certainly 5000 separate insertions per second would\nqualify; can those be batched??), it should be a win to put WAL on its\nown spindle, just to get locality of access to the WAL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:42:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "On 26 Sep 2002 at 10:33, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <[email protected]> writes:\n> > RedHat7.2/PostgreSQL7.1.3\n> \n> I'd suggest a newer release of Postgres ... 7.1.3 is pretty old ...\n\nI agree.. downloadind 7.2.2 right away..\n\n> > Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> \n> What do you mean by \"char\" exactly? If it's really char(N), how much\n> are you paying in padding space? There are very very few cases where\n> I'd not say to use varchar(N), or text, instead. Also, does it have to\n> be character data? If you could use an integer or float datatype\n> instead the index operations should be faster (though I can't say by\n> how much). Have you thought carefully about the order in which the\n> composite index columns are listed?\n\nI have forwarded the idea of putting things into number. If it causes speedup \nin index lookup/creation, it would do. Looks like bigint is the order of the \nday..\n\n> \n> > sort_mem = 12000\n> \n> To create an index of this size, you want to push sort_mem as high as it\n> can go without swapping. 12000 sounds fine for the global setting, but\n> in the process that will create the index, try setting sort_mem to some\n> hundreds of megs or even 1Gb. (But be careful: the calculation of space\n> actually used by CREATE INDEX is off quite a bit in pre-7.3 releases\n> :-(. You should probably expect the actual process size to grow to two\n> or three times what you set sort_mem to. Don't let it get so big as to\n> swap.)\n\nGreat. I was skeptical to push it beyond 100MB. Now I can push it to corners..\n\n> > wal_buffers = 65536 \n> \n> The above is a complete waste of memory space, which would be better\n> spent on letting the kernel expand its disk cache. There's no reason\n> for wal_buffers to be more than a few dozen.\n\nThat was a rather desparate move. Nothing was improving performance and then we \nstarted pushing numbers.. WIll get it back.. Same goes for 64 WAL files.. A GB \nlooks like waste to me..\n\nI might have found the bottleneck, although by accident. 
Mysql was running out \nof space while creating index. So my friend shut down mysql and tried to move \nthings by hand to create links. He noticed that even things like cp were \nterribly slow and it hit us.. May be the culprit is the file system. Ext3 in \nthis case. \n\nMy friend argues for ext2 to eliminate journalling overhead but I favour \nreiserfs personally having used it in pgbench with 10M rows on paltry 20GB IDE \ndisk for 25 tps..\n\nWe will be attempting raiserfs and/or XFS if required. I know how much speed \ndifference exists between resiserfs and ext2. Would not be surprised if \neverythng just starts screaming in one go..\n\nBye\n Shridhar\n\n--\nCropp's Law:\tThe amount of work done varies inversly with the time spent in the\t\noffice.\n\n", "msg_date": "Thu, 26 Sep 2002 20:22:05 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing " }, { "msg_contents": "On 26 Sep 2002 at 10:42, Tom Lane wrote:\n\n> Justin Clift <[email protected]> writes:\n> > If it's any help, when I was testing recently with WAL on a separate\n> > drive, the WAL logs were doing more read&writes per second than the main\n> > data drive.\n> \n> ... but way fewer seeks. For anything involving lots of updating\n> transactions (and certainly 5000 separate insertions per second would\n> qualify; can those be batched??), it should be a win to put WAL on its\n> own spindle, just to get locality of access to the WAL.\n\nProbably they will be a single transcation. If possible we will bunch more of \nthem together.. like 5 seconds of data pushed down in a single transaction but \nnot sure it's possible..\n\nThis is bit like replication but from live oracle machine to postgres, from \ninformation I have. So there should be some chance of tuning there..\n\nBye\n Shridhar\n\n--\nLangsam's Laws:\t(1) Everything depends.\t(2) Nothing is always.\t(3) Everything \nis sometimes.\n\n", "msg_date": "Thu, 26 Sep 2002 20:28:11 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "On Thursday 26 September 2002 21:52, Shridhar Daithankar wrote:\n\n> I might have found the bottleneck, although by accident. Mysql was running\n> out of space while creating index. So my friend shut down mysql and tried\n> to move things by hand to create links. He noticed that even things like cp\n> were terribly slow and it hit us.. May be the culprit is the file system.\n> Ext3 in this case.\n>\n> My friend argues for ext2 to eliminate journalling overhead but I favour\n> reiserfs personally having used it in pgbench with 10M rows on paltry 20GB\n> IDE disk for 25 tps..\n>\n> We will be attempting raiserfs and/or XFS if required. I know how much\n> speed difference exists between resiserfs and ext2. Would not be surprised\n> if everythng just starts screaming in one go..\n\nAs it was found by someone before any non-journaling FS is faster than\njournaling one. 
This due to double work done by FS and database.\n\nTry it on ext2 and compare.\n\n--\nDenis\n\n", "msg_date": "Thu, 26 Sep 2002 22:04:41 +0700", "msg_from": "Denis Perchine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> My friend argues for ext2 to eliminate journalling overhead but I favour\n> reiserfs personally having used it in pgbench with 10M rows on paltry 20GB IDE\n> disk for 25 tps..\n\nIf it's any help, the setup I mentioned before with differnt disks for\nthe data and the WAL files was getting an average of about 72 tps with\n200 concurrent users on pgbench. Haven't tuned it in a hard core way at\nall, and it only has 256MB DDR RAM in it at the moment (single CPU\nAthonXP 1600). These are figures made during the 2.5k+ test runs of\npgbench done when developing pg_autotune recently.\n\nAs a curiosity point, how predictable are the queries you're going to be\nrunning on your database? They sound very simple and very predicatable.\n\nThe pg_autotune tool might be your friend here. It can deal with\narbitrary SQL instead of using the pg_bench stuff of Tatsuos, and it can\nalso deal with an already loaded database. You'd just have to tweak the\nnames of the tables that it vacuums and the names of the indexes that it\nreindexes between each run, to get some idea of your overall server\nperformance at different load points.\n\nProbably worth taking a good look at if you're not afraid of editing\nvariables in C code. :)\n \n> We will be attempting raiserfs and/or XFS if required. I know how much speed\n> difference exists between resiserfs and ext2. Would not be surprised if\n> everythng just starts screaming in one go..\n\nWe'd all probably be interested to hear this. Added the PostgreSQL\n\"Performance\" mailing list to this thread too, Just In Case. (wow that's\na lot of cross posting now).\n\nRegards and best wishes,\n\nJustin Clift\n \n> Bye\n> Shridhar\n> \n> --\n> Cropp's Law: The amount of work done varies inversly with the time spent in the\n> office.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 27 Sep 2002 01:12:49 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 27 Sep 2002 at 1:12, Justin Clift wrote:\n\n> Shridhar Daithankar wrote:\n> As a curiosity point, how predictable are the queries you're going to be\n> running on your database? They sound very simple and very predicatable.\n\nMostly predictable selects. Not a domain expert on telecom so not very sure. \nBut in my guess prepare statement in 7.3 should come pretty handy. i.e. by the \ntime we finish evaluation and test deployment, 7.3 will be out in next couple \nof months to say so. So I would recommend doing it 7.3 way only..\n> \n> The pg_autotune tool might be your friend here. 
It can deal with\n> arbitrary SQL instead of using the pg_bench stuff of Tatsuos, and it can\n> also deal with an already loaded database. You'd just have to tweak the\n> names of the tables that it vacuums and the names of the indexes that it\n> reindexes between each run, to get some idea of your overall server\n> performance at different load points.\n> \n> Probably worth taking a good look at if you're not afraid of editing\n> variables in C code. :)\n\nGladly. We started with altering pgbench here for testing and rapidly settled \nto perl generated random queries. Once postgresql wins the evaluation match and \nthings come to implementation, pg_autotune would be a handy tool. Just that \ncan't do it right now. Have to fight mysql and SAP DB before that..\n\nBTW any performance figures on SAP DB? People here are as it frustrated with it \nwith difficulties in setting it up. But still..\n> \n\n> > We will be attempting raiserfs and/or XFS if required. I know how much speed\n> > difference exists between resiserfs and ext2. Would not be surprised if\n> > everythng just starts screaming in one go..\n> \n> We'd all probably be interested to hear this. Added the PostgreSQL\n> \"Performance\" mailing list to this thread too, Just In Case. (wow that's\n> a lot of cross posting now).\n\nI know..;-) Glad that PG list does not have strict policies like no non-\nsubscriber posting or no attachments.. etc.. \n\nIMO reiserfs, though journalling one, is faster than ext2 etc. because the way \nit handles metadata. Personally I haven't come across ext2 being faster than \nreiserfs on few machine here for day to day use.\n\nI guess I should have a freeBSD CD handy too.. Just to give it a try. If it \ncomes down to a better VM.. though using 2.4.19 here.. so souldn't matter \nmuch..\n\nI will keep you guys posted on file system stuff... Glad that we have much \nflexibility with postgresql..\n\nBye\n Shridhar\n\n--\nBilbo's First Law:\tYou cannot count friends that are all packed up in barrels.\n\n", "msg_date": "Thu, 26 Sep 2002 20:59:01 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 09:52, Shridhar Daithankar wrote:\n> My friend argues for ext2 to eliminate journalling overhead but I favour \n> reiserfs personally having used it in pgbench with 10M rows on paltry 20GB IDE \n> disk for 25 tps..\n> \n> We will be attempting raiserfs and/or XFS if required. I know how much speed \n> difference exists between resiserfs and ext2. Would not be surprised if \n> everythng just starts screaming in one go..\n> \n\nI'm not sure about reiserfs or ext3 but with XFS, you can create your\nlog on another disk. Also worth noting is that you can also configure\nthe size and number of log buffers. There are also some other\nperformance type enhancements you can fiddle with if you don't mind\nrisking time stamp consistency in the event of a crash. If your setup\nallows for it, you might want to consider using XFS in this\nconfiguration.\n\nWhile I have not personally tried moving XFS' log to another device,\nI've heard that performance gains can be truly stellar. 
Assuming memory\nallows, twiddling with the log buffering is said to allow for large\nstrides in performance as well.\n\nIf you do try this, I'd love to hear back about your results and\nimpressions.\n\nGreg", "msg_date": "26 Sep 2002 10:41:37 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n> I might have found the bottleneck, although by accident. Mysql was running out \n> of space while creating index. So my friend shut down mysql and tried to move \n> things by hand to create links. He noticed that even things like cp were \n> terribly slow and it hit us.. May be the culprit is the file system. Ext3 in \n> this case. \n\nI just added a file system and multi-cpu section to my performance\ntuning paper:\n\n\thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n\nThe paper does recommend ext3, but the differences between file systems\nare very small. If you are seeing 'cp' as slow, I wonder if it may be\nsomething more general, like poorly tuned hardware or something. You can\nuse 'dd' to throw some data around the file system and see if that is\nshowing slowness; compare those numbers to another machine that has\ndifferent hardware/OS.\n\nAlso, though ext3 is slower, turning fsync off should make ext3 function\nsimilar to ext2. That would be an interesting test if you suspect ext3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 12:41:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n\n> I'm not sure about reiserfs or ext3 but with XFS, you can create your\n> log on another disk. Also worth noting is that you can also configure\n> the size and number of log buffers. There are also some other\n> performance type enhancements you can fiddle with if you don't mind\n> risking time stamp consistency in the event of a crash. If your setup\n> allows for it, you might want to consider using XFS in this\n> configuration.\n\nYou can definitely put the ext3 log on a different disk with 2.4\nkernels. \n\nAlso, if you put the WAL logs on a different disk from the main\ndatabase, and mount that partition with 'data=writeback' (ie\nmetadata-only journaling) ext3 should be pretty fast, since WAL files\nare preallocated and there will therefore be almost no metadata\nupdates.\n\nYou should be able to mount the main database with \"data=ordered\" (the\ndefault) for good performance and reasonable safety.\n\nI think putting WAL on its own disk(s) is one of the keys here.\n\n-Doug\n", "msg_date": "26 Sep 2002 13:16:36 -0400", "msg_from": "Doug cNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 11:41, Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n> > I might have found the bottleneck, although by accident. Mysql was running out \n> > of space while creating index. So my friend shut down mysql and tried to move \n> > things by hand to create links. He noticed that even things like cp were \n> > terribly slow and it hit us.. May be the culprit is the file system. 
Ext3 in \n> > this case. \n> \n> I just added a file system and multi-cpu section to my performance\n> tuning paper:\n> \n> \thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> The paper does recommend ext3, but the differences between file systems\n> are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> something more general, like poorly tuned hardware or something. You can\n> use 'dd' to throw some data around the file system and see if that is\n> showing slowness; compare those numbers to another machine that has\n> different hardware/OS.\n\n\nThat's a good point. Also, if you're using IDE, you do need to verify\nthat you're using DMA and proper PIO mode if at possible. Also, big\nperformance improvements can be seen by making sure your IDE bus speed\nhas been properly configured. The drivetweak-gtk and hdparm utilities\ncan make huge difference in performance. Just be sure you know what the\nheck your doing when you mess with those.\n\nGreg", "msg_date": "26 Sep 2002 12:36:57 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 11:41, Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n> > I might have found the bottleneck, although by accident. Mysql was running out \n> > of space while creating index. So my friend shut down mysql and tried to move \n> > things by hand to create links. He noticed that even things like cp were \n> > terribly slow and it hit us.. May be the culprit is the file system. Ext3 in \n> > this case. \n> \n> I just added a file system and multi-cpu section to my performance\n> tuning paper:\n> \n> \thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> The paper does recommend ext3, but the differences between file systems\n> are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> something more general, like poorly tuned hardware or something. You can\n> use 'dd' to throw some data around the file system and see if that is\n> showing slowness; compare those numbers to another machine that has\n> different hardware/OS.\n> \n> Also, though ext3 is slower, turning fsync off should make ext3 function\n> similar to ext2. That would be an interesting test if you suspect ext3.\n\nI'm curious as to why you recommended ext3 versus some other (JFS,\nXFS). Do you have tests which validate that recommendation or was it a\nsimple matter of getting the warm fuzzies from familiarity?\n\nGreg", "msg_date": "26 Sep 2002 12:44:22 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "If you are seeing very slow performance on a drive set, check dmesg to see \nif you're getting SCSI bus errors or something similar. If your drives \naren't properly terminated then the performance will suffer a great deal.\n\n", "msg_date": "Thu, 26 Sep 2002 12:41:55 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland wrote:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> > something more general, like poorly tuned hardware or something. 
You can\n> > use 'dd' to throw some data around the file system and see if that is\n> > showing slowness; compare those numbers to another machine that has\n> > different hardware/OS.\n> > \n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2. That would be an interesting test if you suspect ext3.\n> \n> I'm curious as to why you recommended ext3 versus some other (JFS,\n> XFS). Do you have tests which validate that recommendation or was it a\n> simple matter of getting the warm fuzzies from familiarity?\n\nI used the attached email as a reference. I just changed the wording to\nbe:\n\t\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3 and xfs are journal-based, and Reiser is\n\toptimized for small files. Fortunately, the journaling file systems\n\taren't significantly slower than ext2 so they are probably the best\n\tchoice.\n\nso I don't specifically recommend ext3 anymore. As I remember, ext3 is\ngood only in that it can read ext2 file systems. I think XFS may be the\nbest bet.\n\nCan anyone clarify if \"data=writeback\" is safe for PostgreSQL. \nSpecifically, are the data files recovered properly or is this option\nonly for a filesystem containing WAL?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073", "msg_date": "Thu, 26 Sep 2002 16:00:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> The paper does recommend ext3, but the differences between file systems\n> are very small.\n\nWell, I only did a very rough benchmark (a few runs of pgbench), but\nthe results I found were drastically different: ext2 was significantly\nfaster (~50%) than ext3-writeback, which was in turn significantly\nfaster (~25%) than ext3-ordered.\n\n> Also, though ext3 is slower, turning fsync off should make ext3 function\n> similar to ext2.\n\nWhy would that be?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 16:41:49 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <[email protected]> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n> \n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n\nWow. That leaves no good Linux file system alternatives. PostgreSQL\njust wants an ordinary file system that has reliable recovery from a\ncrash.\n\n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n> \n> Why would that be?\n\nI assumed it was the double fsync for the normal and journal that made\nthe journalling file systems slog.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:45:54 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I have seen various benchmarks where XFS seems to perform best when it \ncomes to huge amounts of data and many files (due to balanced internal \nb+ trees).\nalso, XFS seems to be VERY mature and very stable.\next2/3 don't seem to be that fast in most of the benchmarks.\n\ni did some testing with reiser some time ago. the problem is that it \nseems to restore a very historic consistent snapshot of the data. XFS \nseems to be much better in this respect.\n\ni have not tested JFS yet (but on this damn AIX beside me)\nfrom my point of view i strongly recommend XFS (maybe somebody from \nRedHat should think about it).\n\n Hans\n\n\nNeil Conway wrote:\n\n>Bruce Momjian <[email protected]> writes:\n> \n>\n>>The paper does recommend ext3, but the differences between file systems\n>>are very small.\n>> \n>>\n>\n>Well, I only did a very rough benchmark (a few runs of pgbench), but\n>the results I found were drastically different: ext2 was significantly\n>faster (~50%) than ext3-writeback, which was in turn significantly\n>faster (~25%) than ext3-ordered.\n>\n> \n>\n>>Also, though ext3 is slower, turning fsync off should make ext3 function\n>>similar to ext2.\n>> \n>>\n>\n>Why would that be?\n>\n>Cheers,\n>\n>Neil\n>\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Thu, 26 Sep 2002 22:55:30 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <[email protected]> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n> \n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n> \n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n> \n> Why would that be?\n\nOK, I changed the text to:\n\t\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\n\toptimized for small files and does journalling. The journalling file\n\tsystems can be significantly slower than ext2 but when crash recovery is\n\trequired, ext2 isn't an option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:57:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Wow. 
That leaves no good Linux file system alternatives.\n> PostgreSQL just wants an ordinary file system that has reliable\n> recovery from a crash.\n\nI'm not really familiar with the reasoning behind ext2's reputation as\nrecovering poorly from crashes; if we fsync a WAL record to disk\nbefore we lose power, can't we recover reliably, even with ext2?\n\n> > > Also, though ext3 is slower, turning fsync off should make ext3\n> > > function similar to ext2.\n> > \n> > Why would that be?\n> \n> I assumed it was the double fsync for the normal and journal that\n> made the journalling file systems slog.\n\nWell, a journalling file system would need to write a journal entry\nand flush that to disk, even if fsync is disabled -- whereas without\nfsync enabled, ext2 doesn't have to flush anything to disk. ISTM that\nthe performance advantage of ext2 over ext3 is should be even larger\nwhen fsync is not enabled.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 17:03:26 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I tend to agree with this though I have nothing to back up it with. My\nimpression is that XFS does very well for large files. Accepting that\nas fact?, my impression is that XFS historically does well for\ndatabase's. Again, I have nothing to back that up other than hear-say\nand conjecture.\n\nGreg\n\n\nOn Thu, 2002-09-26 at 15:55, Hans-Jürgen Schönig wrote:\n> I have seen various benchmarks where XFS seems to perform best when it \n> comes to huge amounts of data and many files (due to balanced internal \n> b+ trees).\n> also, XFS seems to be VERY mature and very stable.\n> ext2/3 don't seem to be that fast in most of the benchmarks.\n> \n> i did some testing with reiser some time ago. the problem is that it \n> seems to restore a very historic consistent snapshot of the data. 
XFS \n> seems to be much better in this respect.\n> \n> i have not tested JFS yet (but on this damn AIX beside me)\n> from my point of view i strongly recommend XFS (maybe somebody from \n> RedHat should think about it).\n> \n> Hans\n> \n> \n> Neil Conway wrote:\n> \n> >Bruce Momjian <[email protected]> writes:\n> > \n> >\n> >>The paper does recommend ext3, but the differences between file systems\n> >>are very small.\n> >> \n> >>\n> >\n> >Well, I only did a very rough benchmark (a few runs of pgbench), but\n> >the results I found were drastically different: ext2 was significantly\n> >faster (~50%) than ext3-writeback, which was in turn significantly\n> >faster (~25%) than ext3-ordered.\n> >\n> > \n> >\n> >>Also, though ext3 is slower, turning fsync off should make ext3 function\n> >>similar to ext2.\n> >> \n> >>\n> >\n> >Why would that be?\n> >\n> >Cheers,\n> >\n> >Neil\n> >\n> > \n> >\n> \n> \n> -- \n> *Cybertec Geschwinde u Schoenig*\n> Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria\n> Tel: +43/1/913 68 09; +43/664/233 90 75\n> www.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n> <http://cluster.postgresql.at>, www.cybertec.at \n> <http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]", "msg_date": "26 Sep 2002 16:03:51 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Has there been any thought of providing RAW disk support to bypass the fs?\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Bruce Momjian\nSent: Thursday, September 26, 2002 3:57 PM\nTo: Neil Conway\nCc: [email protected]; [email protected];\[email protected]\nSubject: Re: [HACKERS] [GENERAL] Performance while loading data and\nindexing\n\n\nNeil Conway wrote:\n> Bruce Momjian <[email protected]> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n>\n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n>\n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n>\n> Why would that be?\n\nOK, I changed the text to:\n\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\n\toptimized for small files and does journalling. The journalling file\n\tsystems can be significantly slower than ext2 but when crash recovery is\n\trequired, ext2 isn't an option.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Thu, 26 Sep 2002 16:06:07 -0500", "msg_from": "\"James Maes\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Wow. That leaves no good Linux file system alternatives.\n> > PostgreSQL just wants an ordinary file system that has reliable\n> > recovery from a crash.\n> \n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n> \n> > > > Also, though ext3 is slower, turning fsync off should make ext3\n> > > > function similar to ext2.\n> > > \n> > > Why would that be?\n> > \n> > I assumed it was the double fsync for the normal and journal that\n> > made the journalling file systems slog.\n> \n> Well, a journalling file system would need to write a journal entry\n> and flush that to disk, even if fsync is disabled -- whereas without\n> fsync enabled, ext2 doesn't have to flush anything to disk. ISTM that\n> the performance advantage of ext2 over ext3 is should be even larger\n> when fsync is not enabled.\n\nYes, it is still double-writing. I just thought that if that wasn't\nhappening while the db was waiting for a commit that it wouldn't be too\nbad.\n\nIs it just me or do all the Linux file systems seem like they are\nlacking something when PostgreSQL is concerned? We just want a UFS-like\nfile system on Linux and no one has it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:07:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Wow. That leaves no good Linux file system alternatives.\n> > PostgreSQL just wants an ordinary file system that has reliable\n> > recovery from a crash.\n> \n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n\nWell, I have experienced data loss from ext2 before. Also, recovery\nfrom crashes on large file systems take a very, very long time. I can't\nimagine anyone running a production database on an ext2 file system\nhaving 10's or even 100's of GB. Ouch. Recovery would take forever! \nEven recovery on small file systems (2-8G) can take extended periods of\ntime. Especially so on IDE systems. 
Even then manual intervention is\nnot uncommon.\n\nWhile I can't say that x, y or z is the best FS to use on Linux, I can\nsay that ext2 is probably an exceptionally poor choice from a\nreliability and/or uptime perspective.\n\nGreg", "msg_date": "26 Sep 2002 16:09:15 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland <[email protected]> writes:\n> On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > I'm not really familiar with the reasoning behind ext2's\n> > reputation as recovering poorly from crashes; if we fsync a WAL\n> > record to disk before we lose power, can't we recover reliably,\n> > even with ext2?\n> \n> Well, I have experienced data loss from ext2 before. Also, recovery\n> from crashes on large file systems take a very, very long time.\n\nYes, but wouldn't you face exactly the same issues if you ran a\nUFS-like filesystem in asynchronous mode? Albeit it's not the default,\nbut performance in synchronous mode is usually pretty poor.\n\nThe fact that ext2 defaults to asynchronous mode and UFS (at least on\nthe BSDs) defaults to synchronous mode seems like a total non-issue to\nme. Is there any more to the alleged difference in reliability?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 17:17:30 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Can anyone clarify if \"data=writeback\" is safe for PostgreSQL. \n> Specifically, are the data files recovered properly or is this option\n> only for a filesystem containing WAL?\n\n\"data=writeback\" means that no data is journaled, just metadata (which\nis like XFS or Reiser). An fsync() call should still do what it\nnormally does, commit the writes to disk before returning.\n\n\"data=journal\" journals all data and is the slowest and safest.\n\"data=ordered\" writes out data blocks before committing a journal\ntransaction, which is faster than full data journaling (since data\ndoesn't get written twice) and almost as safe. \"data=writeback\" is\nnoted to keep obsolete data in the case of some crashes (since the\ndata may not have been written yet) but a completed fsync() should\nensure that the data is valid.\n\nSo I guess I'd probably use data=ordered for an all-on-one-fs\ninstallation, and data=writeback for a WAL-only drive.\n\nHope this helps...\n\n-Doug\n", "msg_date": "26 Sep 2002 17:31:55 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n\nUp to a point. We do assume that the filesystem won't lose checkpointed\n(sync'd) writes to data files. To the extent that the filesystem is\nvulnerable to corruption of its own metadata for a file (indirect blocks\nor whatever ext2 uses), that's not a completely safe assumption.\n\nWe'd be happiest with a filesystem that journals its own metadata and\nnot the user data in the file(s). 
I dunno if there are any.\n\nHmm, maybe this is why Oracle likes doing their own filesystem on a raw\ndevice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 17:32:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> We'd be happiest with a filesystem that journals its own metadata and\n> not the user data in the file(s). I dunno if there are any.\n\next3 with data=writeback? (See my previous message to Bruce).\n\n-Doug\n", "msg_date": "26 Sep 2002 17:37:10 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Greg Copeland <[email protected]> writes:\n> > On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > > I'm not really familiar with the reasoning behind ext2's\n> > > reputation as recovering poorly from crashes; if we fsync a WAL\n> > > record to disk before we lose power, can't we recover reliably,\n> > > even with ext2?\n> > \n> > Well, I have experienced data loss from ext2 before. Also, recovery\n> > from crashes on large file systems take a very, very long time.\n> \n> Yes, but wouldn't you face exactly the same issues if you ran a\n> UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> but performance in synchronous mode is usually pretty poor.\n\nYes, before UFS had soft updates, the synchronous nature of UFS made it\nslower than ext2, but now with soft updates, that performance difference\nis gone so you have two files systems, ext2 and ufs, similar peformance,\nbut one is crash-safe and the other is not.\n\nAnd, when comparing the journalling file systems, you have UFS vs.\nXFS/ext3/JFS/Reiser, and UFS is faster. The only thing the journalling\nfile system give you is more rapid reboot, but frankly, if your OS goes\ndown often enough so that is an issue, you have bigger problems than\nfsync time.\n\nThe big problem is that Linux went from non-crash safe right to\ncrash-safe and reboot quick. We need a middle ground, which is where\nUFS/soft updates is.\n\n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. Is there any more to the alleged difference in reliability?\n\nThe reliability problem isn't alleged. ext2 developers admits ext2\nisn't 100% crash-safe. They will say it is usually crash-safe, but that\nisn't good enough for PostgreSQL.\n\nI wish I was wrong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:39:14 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Doug McNaught wrote:\n> Tom Lane <[email protected]> writes:\n> \n> > We'd be happiest with a filesystem that journals its own metadata and\n> > not the user data in the file(s). I dunno if there are any.\n> \n> ext3 with data=writeback? 
(See my previous message to Bruce).\n\nOK, so that makes ext3 crash safe without lots of overhead?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:41:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 17:39, Bruce Momjian wrote:\n> Neil Conway wrote:\n> > Greg Copeland <[email protected]> writes:\n> > > On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > > > I'm not really familiar with the reasoning behind ext2's\n> > > > reputation as recovering poorly from crashes; if we fsync a WAL\n> > > > record to disk before we lose power, can't we recover reliably,\n> > > > even with ext2?\n> > > \n> > > Well, I have experienced data loss from ext2 before. Also, recovery\n> > > from crashes on large file systems take a very, very long time.\n> > \n> > Yes, but wouldn't you face exactly the same issues if you ran a\n> > UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> > but performance in synchronous mode is usually pretty poor.\n> \n> Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> slower than ext2, but now with soft updates, that performance difference\n> is gone so you have two files systems, ext2 and ufs, similar peformance,\n> but one is crash-safe and the other is not.\n\nNote entirely true. ufs is both crash-safe and quick-rebootable. You\ndo need to fsck at some point, but not prior to mounting it. Any\ncorrupt blocks are empty, and are easy to avoid.\n\nSomeone just needs to implement a background fsck that will run on a\nmounted filesystem.\n\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 17:45:23 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Rod Taylor wrote:\n> > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > slower than ext2, but now with soft updates, that performance difference\n> > is gone so you have two files systems, ext2 and ufs, similar peformance,\n> > but one is crash-safe and the other is not.\n> \n> Note entirely true. ufs is both crash-safe and quick-rebootable. You\n> do need to fsck at some point, but not prior to mounting it. Any\n> corrupt blocks are empty, and are easy to avoid.\n\nI am assuming you need to mount the drive as part of the reboot. Of\ncourse you can boot fast with any file system if you don't have to mount\nit. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:47:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:\n> Rod Taylor wrote:\n> > > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > > slower than ext2, but now with soft updates, that performance difference\n> > > is gone so you have two files systems, ext2 and ufs, similar peformance,\n> > > but one is crash-safe and the other is not.\n> > \n> > Note entirely true. ufs is both crash-safe and quick-rebootable. You\n> > do need to fsck at some point, but not prior to mounting it. Any\n> > corrupt blocks are empty, and are easy to avoid.\n> \n> I am assuming you need to mount the drive as part of the reboot. Of\n> course you can boot fast with any file system if you don't have to mount\n> it. :-)\n\nSorry, poor explanation.\n\nBackground fsck (when implemented) would operate on a currently mounted\n(and active) file system. The only reason fsck is required prior to\nreboot now is because no-one had done the work.\n\nhttp://www.freebsd.org/cgi/man.cgi?query=fsck&sektion=8&manpath=FreeBSD+5.0-current\n\nSee the first paragraph of the above.\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 18:03:36 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Rod Taylor wrote:\n> On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:\n> > Rod Taylor wrote:\n> > > > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > > > slower than ext2, but now with soft updates, that performance difference\n> > > > is gone so you have two files systems, ext2 and ufs, similar peformance,\n> > > > but one is crash-safe and the other is not.\n> > > \n> > > Note entirely true. ufs is both crash-safe and quick-rebootable. You\n> > > do need to fsck at some point, but not prior to mounting it. Any\n> > > corrupt blocks are empty, and are easy to avoid.\n> > \n> > I am assuming you need to mount the drive as part of the reboot. Of\n> > course you can boot fast with any file system if you don't have to mount\n> > it. :-)\n> \n> Sorry, poor explanation.\n> \n> Background fsck (when implemented) would operate on a currently mounted\n> (and active) file system. The only reason fsck is required prior to\n> reboot now is because no-one had done the work.\n> \n> http://www.freebsd.org/cgi/man.cgi?query=fsck&sektion=8&manpath=FreeBSD+5.0-current\n> \n> See the first paragraph of the above.\n\nOh, yes, I have heard of that missing feature.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 18:04:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> Doug McNaught wrote:\n> > Tom Lane <[email protected]> writes:\n> > \n> > > We'd be happiest with a filesystem that journals its own metadata and\n> > > not the user data in the file(s). I dunno if there are any.\n> > \n> > ext3 with data=writeback? 
(See my previous message to Bruce).\n> \n> OK, so that makes ext3 crash safe without lots of overhead?\n\nMetadata is journaled so you shouldn't lose data blocks or directory\nentries. Some data blocks (that haven't been fsync()'ed) may have old\nor wrong data in them, but I think that's the same as ufs, right? And\nWAL replay should take care of that.\n\nIt'd be very interesting to do some tests of the various journaling\nmodes. I have an old K6 that I might be able to turn into a\nhit-the-reset-switch-at-ramdom-times machine. What kind of tests\nshould be run?\n\n-Doug\n", "msg_date": "26 Sep 2002 19:26:03 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Doug McNaught <[email protected]> writes:\n> \"data=writeback\" means that no data is journaled, just metadata (which\n> is like XFS or Reiser). An fsync() call should still do what it\n> normally does, commit the writes to disk before returning.\n> \"data=journal\" journals all data and is the slowest and safest.\n> \"data=ordered\" writes out data blocks before committing a journal\n> transaction, which is faster than full data journaling (since data\n> doesn't get written twice) and almost as safe. \"data=writeback\" is\n> noted to keep obsolete data in the case of some crashes (since the\n> data may not have been written yet) but a completed fsync() should\n> ensure that the data is valid.\n\nThanks for the explanation.\n\n> So I guess I'd probably use data=ordered for an all-on-one-fs\n> installation, and data=writeback for a WAL-only drive.\n\nActually I think the ideal thing for Postgres would be data=writeback\nfor both data and WAL drives. We can handle loss of un-fsync'd data\nfor ourselves in both cases.\n\nOf course, if you keep anything besides Postgres data files on a\npartition, you'd possibly want the more secure settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 23:07:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "Hello!\n\nOn Thu, 26 Sep 2002, Bruce Momjian wrote:\n\n> > I'm not really familiar with the reasoning behind ext2's reputation as\n> > recovering poorly from crashes; if we fsync a WAL record to disk\n\nOn relatively big volumes ext2 recovery can end up in formatting the fs \nunder certain cirrumstances.;-)\n\n> > > I assumed it was the double fsync for the normal and journal that\n> > > made the journalling file systems slog.\n> > \n> > Well, a journalling file system would need to write a journal entry\n> > and flush that to disk, even if fsync is disabled -- whereas without\n> > fsync enabled, ext2 doesn't have to flush anything to disk. ISTM that\n> > the performance advantage of ext2 over ext3 is should be even larger\n> > when fsync is not enabled.\n> \n> Yes, it is still double-writing. I just thought that if that wasn't\n> happening while the db was waiting for a commit that it wouldn't be too\n> bad.\n> \n> Is it just me or do all the Linux file systems seem like they are\n> lacking something when PostgreSQL is concerned? 
We just want a UFS-like\n> file system on Linux and no one has it.\n\nmount -o sync an ext2 volume on Linux - and you can get a \"UFS-like\" fs.:)\nmount -o async an FFS volume on FreeBSD - and you can get boost in fs \nperformance.\nPersonally me always mount ext2 fs where Pg is living with sync option.\nFsync in pg is off (since 6.3), this way successfully pass thru a few \nserious crashes on various systems (mostly on power problems).\nIf fsync is on in Pg, performance gets so-oh-oh-oh-oh slowly!=)\nI just have done upgrade from 2.2 kernel on ext2 to ext3 capable 2.4 one\nso I'm planning to do some benchmarking. Roughly saying w/o benchmarks, \nthe performance have been degraded in 2/3 proportion.\n\"But better safe then sorry\".\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: [email protected].\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Fri, 27 Sep 2002 12:14:40 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "[email protected] (Neil Conway) writes:\n\n[snip]\n> > Well, I have experienced data loss from ext2 before. Also, recovery\n> > from crashes on large file systems take a very, very long time.\n> \n> Yes, but wouldn't you face exactly the same issues if you ran a\n> UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> but performance in synchronous mode is usually pretty poor.\n> \n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. Is there any more to the alleged difference in reliability?\n\nUFS on most unix systems (BSD, solaris etc) defaults to sync\nmetadata, async data which is a mode that is completely missing\nfrom ext2 as far as I know.\n\nThis is why UFS is considered safer than ext2. (Running with\n'sync' is too slow to be a usable alternative in most cases.)\n\n _\nMats Lofkvist\[email protected]\n\n\nPS The BSD soft updates yields the safety of the default sync\n metadata / async data mode while being at least as fast as\n running fully async.\n", "msg_date": "27 Sep 2002 12:40:13 +0200", "msg_from": "Mats Lofkvist <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "[email protected] (\"Shridhar Daithankar\") writes:\n\n[snip]\n> \n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \n> setup..\n> \n\nRAID5 is not the best for performance, especially write performance.\nIf it is software RAID it is even worse :-).\n\n(Note also that you need to check that you are not saturating the\nnumber of seeks the disks can handle, not just the bandwith.)\n\nStriping should be better (combined with mirroring if you need the\nsafety, but with both striping and mirroring you may need multiple\nSCSI channels).\n\n _\nMats Lofkvist\[email protected]\n", "msg_date": "27 Sep 2002 12:49:17 +0200", "msg_from": "Mats Lofkvist <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 27 Sep 2002, Mats Lofkvist wrote:\n\n> [email protected] (\"Shridhar Daithankar\") writes:\n> \n> [snip]\n> > \n> > Couple MB of data per sec. to disk is just not saturating it. 
It's a RAID 5 \n> > setup..\n> > \n> \n> RAID5 is not the best for performance, especially write performance.\n> If it is software RAID it is even worse :-).\n\nI take exception to this. RAID5 is a great choice for most folks.\n\n1: RAID5 only writes out the parity stripe and data stripe, not all \nstripes when writing. So, in an 8 disk RAID5 array, writing to a single \n64 k stripe involves one 64k read (parity stripe) and two 64k writes.\n\nOn a mirror set, writing to one 64k stripe involves two 64k writes. The \ndifference isn't that great, and in my testing, a large enough RAID5 \nprovides so much faster read speads by spreading the reads across so many \nheads as to more than make up for the slightly slower writes. My testing \nhas shown that a 4 disk RAID5 can generally run about 85% or more the \nspeed of a mirror set.\n\n2: Why does EVERYONE have to jump on the bandwagon that software RAID 5 \nis bad. My workstation running RH 7.2 uses about 1% of the CPU during \nvery heavy parallel access (i.e. 50 simo pgbenchs) at most. I've seen \nmany hardware RAID cards that are noticeable slower than my workstation \nrunning software RAID. You do know that hardware RAID is just software \nRAID where the processing is done on a seperate CPU on a card, but it's \nstill software doing the work.\n\n3: We just had a hardware RAID card mark both drives in a mirror set bad. \nIt wouldn't accept them back, and all the data was gone. poof. That \nwould never happen in Linux's kernel software RAID, I can always make \nLinux take back a \"bad\" drive.\n\n\nThe only difference between RAID5 with n+1 disks and RAID0 with n disks is \nthat we have to write a parity stripe in RAID5. It's ability to handle \nhigh parallel load is much better than a RAID1 set, and on average, you \nactually write about the same amount with either RAID1 or RAID5.\n\nDon't dog software RAID5, it works and it works well in Linux. Windows, \nhowever, is another issue. There, the software RAID5 is pretty pitiful, \nboth in terms of performance and maintenance.\n\n", "msg_date": "Fri, 27 Sep 2002 09:16:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> We'd be happiest with a filesystem that journals its own metadata and\n> not the user data in the file(s). I dunno if there are any.\n\nMost journalling file systems work this way. Data journalling is not\nvery widespread, AFAIK.\n\n-- \nFlorian Weimer \t [email protected]\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n", "msg_date": "Fri, 27 Sep 2002 21:01:38 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "scott.marlowe wrote:\n\n>(snippage)\n>I take exception to this. RAID5 is a great choice for most folks.\n>\n>\nI agree - certainly RAID5 *used* to be rather sad, but modern cards have \nimproved this no end on the hardware side - e.g.\n\nI recently benchmarked a 3Ware 8x card on a system with 4 x 15000 rpm \nMaxtor 70Gb drives and achieved 120 Mb/s for (8K) reads and 60 Mb/s for \n(8K) writes using RAID5. I used Redhat 7.3 + ext2. The benchmarking \nprogram was Bonnie.\n\nGiven that the performance of a single disk was ~30 Mb/s for reads and \nwrites, I felt this was quite a good result ! 
( Other cards I had tried \npreviously struggled to maintain 1/2 the write rate of a single disk in \nsuch a configuration).\n\nAs for software RAID5, I have not tried it out.\n\nOf course I could not get 60Mb/s while COPYing data into Postgres... \ntypically cpu seemed to be the bottleneck in this case (what was the \nactual write rate? I hear you asking..err.. cant recall I'm afraid.. \nmust try it out again )\n\ncheers\n\nMark\n\n", "msg_date": "Sat, 28 Sep 2002 13:38:52 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Some of you may be interested in this seemingly exhaustive benchmark\nbetween ext2/3, ReiserFS, JFS, and XFS.\n\nhttp://www.osdl.org/presentations/lwe-jgfs.pdf\n\n\n", "msg_date": "03 Oct 2002 16:09:56 -0700", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Hey, excellent. Thanks!\n\nBased on that, it appears that XFS is a pretty good FS to use. For me,\nthe real surprise was how well reiserfs performed.\n\nGreg\n\nOn Thu, 2002-10-03 at 18:09, Mike Benoit wrote:\n> Some of you may be interested in this seemingly exhaustive benchmark\n> between ext2/3, ReiserFS, JFS, and XFS.\n> \n> http://www.osdl.org/presentations/lwe-jgfs.pdf\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org", "msg_date": "03 Oct 2002 18:35:34 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> Hey, excellent. Thanks!\n> \n> Based on that, it appears that XFS is a pretty good FS to use. For me,\n> the real surprise was how well reiserfs performed.\n> \n\nOK, hardware performance paper updated:\n\n---------------------------------------------------------------------------\n\nFile system choice is particularly difficult on Linux because there are\nso many file system choices, and none of them are optimal: ext2 is not\nentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\noptimized for small files and does journalling. The journalling file\nsystems can be significantly slower than ext2 but when crash recovery is\nrequired, ext2 isn't an option. If ext2 must be used, mount it with sync\nenabled. Some people recommend xfs or an ext3 filesystem mounted with\ndata=writeback.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 19:59:56 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002, Neil Conway wrote:\n\n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. Is there any more to the alleged difference in reliability?\n\nIt was sort of pointed out here, but perhaps not made completely\nclear, that Berkley FFS defaults to synchronous meta-data updates,\nbut asynchronous data updates. 
You can also specify entirely\nsynchronous or entirely asynchronous updates. Linux ext2fs supports\nonly these last two modes, which is the problem.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 7 Oct 2002 00:52:24 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" } ]
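For readers who want to try the mount options discussed in the thread above: data=writeback and data=ordered are ordinary ext3 mount options, so a split data/WAL layout along the lines Tom and Doug describe can be sketched in /etc/fstab. The device names and mount points below are only placeholders, not a recommendation for any particular box:

    # PostgreSQL data files: journal metadata only; the server fsyncs what it must
    /dev/sda5   /var/lib/pgsql/data   ext3   defaults,data=writeback   1 2
    # WAL partition: data=writeback here too, or data=ordered if in doubt
    /dev/sdb1   /var/lib/pgsql/wal    ext3   defaults,data=writeback   1 2

Which of the three ext3 data modes actually behaves best under a given PostgreSQL load is worth measuring rather than assuming; the benchmark paper linked later in the thread is one starting point.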
[ { "msg_contents": "According to ext3 hackers (Stephen Tweedie, Andrew Morton). ext3\ndata=journal mode is much faster than any of the other mode for\nworkloads which do a lot of syncrhonous i/o. Personally, I have seen\ndramatic improvements on moving mail queues to this mode (postfix in\nparticularly flies with this mode)\n\nWhile this may seem contradictory (forcing journaling for the data in\naddition to the metadata), it will likely improve the performance for\nsync I/O loads like mail servers because it can do all of the I/O to the\njournal without any seek or sync overhead while the mail is arriving.\n\nI assume that since Postgresql does a lot of fsyncs, it would benefit\nalso. I have sent email to Sridhar asking if he could test this\n\nAnother thing to note is that Linux 2.4.x kernels < 2.4.20-pre4 use\nbounce buffer's to do IO if the machine has > 1GB memory. Distributor\nkernels such as Redhat/Suse/Mandrake are patched to do IO via DMA\nto/from highmem (>1GB). According to IBM's paper @ OLS, this improves IO\nperformance by 40%\n\nBTW, Is this list archived on the website\n\nRegards, Yusuf\n-- \nYusuf Goolamabbas\[email protected]\n", "msg_date": "Fri, 27 Sep 2002 10:55:10 +0800", "msg_from": "Yusuf Goolamabbas <[email protected]>", "msg_from_op": true, "msg_subject": "Would ext3 data=journal help for Postgres synchronous io mode" } ]
[ { "msg_contents": "subscribe\n-- \nsecure email with gpg http://fortytwo.ch/gpg\n\nNOTICE: subkey signature! request key 92082481 from keyserver.kjsl.com\n", "msg_date": "27 Sep 2002 11:42:22 +0200", "msg_from": "Adrian von Bidder <[email protected]>", "msg_from_op": true, "msg_subject": "" } ]
[ { "msg_contents": "When a table is created with a primary key it generates a index.\nDos the queries on that table use that index automatically?\nDo I need to reindex that index after insertions?\n\n\n\n\n\n\n\nWhen a table is created with a primary key it \ngenerates a index.\nDos the queries on that table use that index \nautomatically?\nDo I need to reindex that index after \ninsertions?", "msg_date": "Sat, 28 Sep 2002 21:50:13 +0600", "msg_from": "\"Waruna Geekiyanage\" <[email protected]>", "msg_from_op": true, "msg_subject": "INDEX" }, { "msg_contents": "On Sat, Sep 28, 2002 at 09:50:13PM +0600, Waruna Geekiyanage wrote:\n> When a table is created with a primary key it generates a index.\n> Dos the queries on that table use that index automatically?\n\nOnly if you analyse the table, and it's a \"win\". See the various\npast discussion on -general, for instance, about index use, and the\nFAQ.\n\n> Do I need to reindex that index after insertions?\n\nNo, but you need to analyse.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 28 Sep 2002 15:13:18 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INDEX" } ]
[ { "msg_contents": "On Tue, 1 Oct 2002, Adam Siegel wrote:\n\n> I have a table that has about 200 rows in it. I have 2 other tables\n> that have about 300,000 rows each that reference the first table\n> through a foriegn key. I run a process that rebuilds these tables.\n> First I delete the rows in the large tables (takes about 30 seconds),\n> then I delete the the rows in the first table (takes about 5 minutes\n> !!!). Each of these are done in separate transactions.\n>\n> If I do a vacuum analyze on each of the large tables just after the\n> delete then deleting the rows from the first table takes just a second\n> or two. My guess is that postgres is still check the foriegn keys\n> from the first table to the others even though the records are deleted\n> in the larger tables. The vacuum cleans up the deleted records, so it\n> goes faster. Am I wrong. Any ideas?\n\nThat seems reasonable. It's still going to be doing some action on those\ntables and it's going to have to scan the tables in some case. It's wierd\nthat it's taking that long to do it in any case however, what does the\nschema for the tables look like?\n\n\n\n", "msg_date": "Tue, 1 Oct 2002 11:27:56 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Deletes from tables with foreign keys taking" }, { "msg_contents": "I have a table that has about 200 rows in it. I have 2 other tables that have about 300,000 rows each that reference the first table through a foriegn key. I run a process that rebuilds these tables. First I delete the rows in the large tables (takes about 30 seconds), then I delete the the rows in the first table (takes about 5 minutes !!!). Each of these are done in separate transactions.\n\nIf I do a vacuum analyze on each of the large tables just after the delete then deleting the rows from the first table takes just a second or two. My guess is that postgres is still check the foriegn keys from the first table to the others even though the records are deleted in the larger tables. The vacuum cleans up the deleted records, so it goes faster. Am I wrong. Any ideas?\n\nRegards,\nAdam\n\n\n\n\n\n\n\n\n\nI have a table that has about 200 rows in it.  \nI have 2 other tables that have about 300,000 rows each that reference the first \ntable through a foriegn key.  I run a process that rebuilds these \ntables.  First I delete the rows in the large tables (takes about 30 \nseconds), then I delete the the rows in the first table (takes about 5 minutes \n!!!).  Each of these are done in separate transactions.\n \nIf I do a vacuum analyze on each of the large \ntables just after the delete then deleting the rows from the first table takes \njust a second or two.  My guess is that postgres is still check the foriegn \nkeys from the first table to the others even though the records are deleted in \nthe larger tables.  The vacuum cleans up the deleted records, so it goes \nfaster.  Am I wrong.  Any ideas?\n \nRegards,\nAdam", "msg_date": "Tue, 1 Oct 2002 14:28:04 -0400", "msg_from": "\"Adam Siegel\" <[email protected]>", "msg_from_op": false, "msg_subject": "Deletes from tables with foreign keys taking too long" }, { "msg_contents": "Adam,\n\n> I have a table that has about 200 rows in it. I have 2 other tables that\n> have about 300,000 rows each that reference the first table through a\n> foriegn key. I run a process that rebuilds these tables. 
First I delete\n> the rows in the large tables (takes about 30 seconds), then I delete the\n> rows in the first table (takes about 5 minutes !!!). Each of these is\n> done in a separate transaction.\n\nNot that this answers your performance questions, but you will be able to do \nthis faster if you use TRUNCATE instead of DELETE.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 1 Oct 2002 14:44:05 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Deletes from tables with foreign keys taking\n\ttoo long" } ]
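Putting the thread's advice together, the rebuild could be sequenced roughly as below. The table names are placeholders; big_child_1 and big_child_2 stand for the two ~300,000-row tables whose foreign keys reference the small parent table:

    DELETE FROM big_child_1;
    DELETE FROM big_child_2;
    VACUUM ANALYZE big_child_1;   -- must run outside a transaction block
    VACUUM ANALYZE big_child_2;
    DELETE FROM parent;           -- the per-row FK checks now scan small, freshly-vacuumed tables

If the child tables are being emptied completely anyway, TRUNCATE TABLE big_child_1; is the faster variant Josh mentions, since it throws the rows away without scanning them.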
[ { "msg_contents": "Relative performance question:\n\nI have 2 UPDATE queires in a function. \n\ntable_a: 117,000 records\ntable_b: 117,000 records\ntable_c: 1.5 million records\n\n #1 updates table_a, field_2 from table_b, field_1 based on a joining field_3. \nAround 110,000 updates\n#2 updates table_a, field_5 from table_c, field_2 joining on field_3. \nAround 110,000 updates.\n\n#1 takes 5-7 minutes; #2 takes about 15 seconds. The only difference I can \ndiscern is that table_a, field_2 is indexed and table_a, field_5 is not.\n\nIs it reasonable that updating the index would actually make the query take \n20x longer? If not, I'll post actual table defs and query statements.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 1 Oct 2002 14:51:29 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Comparitive UPDATE speed" }, { "msg_contents": "On Tue, 2002-10-01 at 16:51, Josh Berkus wrote:\n> Relative performance question:\n> \n> I have 2 UPDATE queires in a function. \n> \n> table_a: 117,000 records\n> table_b: 117,000 records\n> table_c: 1.5 million records\n> \n> #1 updates table_a, field_2 from table_b, field_1 based on a joining field_3. \n> Around 110,000 updates\n> #2 updates table_a, field_5 from table_c, field_2 joining on field_3. \n> Around 110,000 updates.\n> \n> #1 takes 5-7 minutes; #2 takes about 15 seconds. The only difference I can \n> discern is that table_a, field_2 is indexed and table_a, field_5 is not.\n> \n> Is it reasonable that updating the index would actually make the query take \n> 20x longer? If not, I'll post actual table defs and query statements.\n\nAbsolutely. You are doing lots of extra work. \n\nFor each of the 110,000 updates, you are deleting a leaf node from one\npart of the index tree and then inserting it into another part of the\ntree.\n\nIt will get even worse as you add more rows to table_a, since the\nindex tree will get deeper, and more work work must be done during\neach insert and delete.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"What other evidence do you have that they are terrorists, |\n| other than that they trained in these camps?\" |\n| 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n| men arrested near Buffalo NY |\n+------------------------------------------------------------+\n\n", "msg_date": "02 Oct 2002 05:48:01 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" } ]
[ { "msg_contents": "\nRandy,\n\n> I'm not sure about 20 times longer but you would have index records\n> that \n> would need to be changed. Is field_3 indexed in all 3 tables? If\n> table_b \n> does not have an index on field_3 and the other tables do, I'd guess\n> that \n> would make this take longer too.\n\nYeah, they're indexed. I'm going to try the updates without the index\non field_2 tonight.\n\n-Josh Berkus\n", "msg_date": "Tue, 01 Oct 2002 16:50:21 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparitive UPDATE speed" } ]
[ { "msg_contents": "Hi,\n\nToday we concluded test for database performance. Attached are results and the \nschema, for those who have missed earlier discussion on this.\n\nWe have (almost) decided that we will partition the data across machines. The \ntheme is, after every some short interval a burst of data will be entered in \nnew table in database, indexed and vacuume. The table(s) will be inherited so \nthat query on base table will fetch results from all the children. The \napplication has to consolidate all the data per node basis. If the database is \nnot postgresql, app. has to consolidate data across partitions as well.\n\nNow we need to investigate whether selecting on base table to include children \nwould use indexes created on children table.\n\nIt's estimated that when entire data is gathered, total number of children \ntables would be around 1K-1.1K across all machines. \n\nThis is in point of average rate of data insertion i.e. 5K records/sec and \ntotal data size, estimated to be 9 billion rows max i.e. estimated database \nsize is 900GB. Obviously it's impossible to keep insertion rate on an indexed \ntable high as data grows. So partitioning/inheritance looks better approach. \n\nPostgresql is not the final winner as yet. Mysql is in close range. I will keep \nyou guys posted about the result.\n\nLet me know about any comments..\n\nBye\n Shridhar\n\n--\nPrice's Advice:\tIt's all a game -- play it to have fun.\n\n\n\nMachine \t\t\t\t\t\t\t\t\nCompaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n\"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\nPerformance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n\t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n\t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n\t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\nComplete Data\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\nInserts + building a composite index\t\t\t\t\t\t\t\t\n\"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n\"about 100 bytes each, schema on \n'schema' sheet\"\t\t\t\t\t\t\t\t\n\"composite index on 3 fields \n(esn, min, datetime)\"\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\nLoad Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n\t\t\t\t\t\t\nDatabase Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n\t\t\t\t\t\t\nAverage per partition\t\t\t\t\t\t\n\t\t\t\t\t\t\nInserts + building a composite index\t\t\t\t\t\t\n\"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n\"about 100 bytes each, schema on \n'schema' sheet\"\t\t\t\t\t\t\n\"composite index on 3 fields \n(esn, min, datetime)\"\t\t\t\t\t\t\n\t\t\t\t\t\t\nSelect Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\nbased on equality match of 2 fields\t\t\t\t\t\t\n(esn and min) - 4 concurrent queries \nrunning\n\t\t\t\t\t\t\nDatabase Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n\nField Name\tField Type\tNullable\tIndexed\ntype\t\tint\t\tno\t\tno\nesn\t\tchar (10)\tno\t\tyes\nmin\t\tchar (10)\tno\t\tyes\ndatetime\ttimestamp\tno\t\tyes\nopc0\t\tchar (3)\tno\t\tno\nopc1\t\tchar (3)\tno\t\tno\nopc2\t\tchar (3)\tno\t\tno\ndpc0\t\tchar (3)\tno\t\tno\ndpc1\t\tchar (3)\tno\t\tno\ndpc2\t\tchar (3)\tno\t\tno\nnpa\t\tchar (3)\tno\t\tno\nnxx\t\tchar (3)\tno\t\tno\nrest\t\tchar 
(4)\tno\t\tno\nfield0\t\tint\t\tyes\t\tno\nfield1\t\tchar (4)\tyes\t\tno\nfield2\t\tint\t\tyes\t\tno\nfield3\t\tchar (4)\tyes\t\tno\nfield4\t\tint\t\tyes\t\tno\nfield5\t\tchar (4)\tyes\t\tno\nfield6\t\tint\t\tyes\t\tno\nfield7\t\tchar (4)\tyes\t\tno\nfield8\t\tint\t\tyes\t\tno\nfield9\t\tchar (4)\tyes\t\tno", "msg_date": "Thu, 03 Oct 2002 18:06:10 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large databases, performance" }, { "msg_contents": "Can you comment on the tools you are using to do the insertions (Perl, \nJava?) and the distribution of data (all random, all static), and the \ntransaction scope (all inserts in one transaction, each insert as a \nsingle transaction, some group of inserts as a transaction).\n\nI'd be curious what happens when you submit more queries than you have \nprocessors (you had four concurrent queries and four CPUs), if you care \nto run any additional tests. Also, I'd report the query time in \nabsolute (like you did) and also in 'Time/number of concurrent queries\". \n This will give you a sense of how the system is scaling as the workload \nincreases. Personally I am more concerned about this aspect than the \nload time, since I am going to guess that this is where all the time is \nspent. \n\nWas the original posting on GENERAL or HACKERS. Is this moving the \nPERFORMANCE for follow-up? I'd like to follow this discussion and want \nto know if I should join another group?\n\nThanks,\n\nCharlie\n\nP.S. Anyone want to comment on their expectation for 'commercial' \ndatabases handling this load? I know that we cannot speak about \nspecific performance metrics on some products (licensing restrictions) \nbut I'd be curious if folks have seen some of the databases out there \nhandle these dataset sizes and respond resonably.\n\n\nShridhar Daithankar wrote:\n\n>Hi,\n>\n>Today we concluded test for database performance. Attached are results and the \n>schema, for those who have missed earlier discussion on this.\n>\n>We have (almost) decided that we will partition the data across machines. The \n>theme is, after every some short interval a burst of data will be entered in \n>new table in database, indexed and vacuume. The table(s) will be inherited so \n>that query on base table will fetch results from all the children. The \n>application has to consolidate all the data per node basis. If the database is \n>not postgresql, app. has to consolidate data across partitions as well.\n>\n>Now we need to investigate whether selecting on base table to include children \n>would use indexes created on children table.\n>\n>It's estimated that when entire data is gathered, total number of children \n>tables would be around 1K-1.1K across all machines. \n>\n>This is in point of average rate of data insertion i.e. 5K records/sec and \n>total data size, estimated to be 9 billion rows max i.e. estimated database \n>size is 900GB. Obviously it's impossible to keep insertion rate on an indexed \n>table high as data grows. So partitioning/inheritance looks better approach. \n>\n>Postgresql is not the final winner as yet. Mysql is in close range. 
I will keep \n>you guys posted about the result.\n>\n>Let me know about any comments..\n>\n>Bye\n> Shridhar\n>\n>--\n>Price's Advice:\tIt's all a game -- play it to have fun.\n>\n>\n> \n>\n>------------------------------------------------------------------------\n>\n>Machine \t\t\t\t\t\t\t\t\n>Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n>\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n>\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n>\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n>\"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\t\t\n>Performance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n>\t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n>\t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n>\t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\n>Complete Data\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\t\t\n>Inserts + building a composite index\t\t\t\t\t\t\t\t\n>\"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n>\"about 100 bytes each, schema on \n>'schema' sheet\"\t\t\t\t\t\t\t\t\n>\"composite index on 3 fields \n>(esn, min, datetime)\"\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Load Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n>\t\t\t\t\t\t\n>Database Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n>\t\t\t\t\t\t\n>Average per partition\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Inserts + building a composite index\t\t\t\t\t\t\n>\"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n>\"about 100 bytes each, schema on \n>'schema' sheet\"\t\t\t\t\t\t\n>\"composite index on 3 fields \n>(esn, min, datetime)\"\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Select Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\n>based on equality match of 2 fields\t\t\t\t\t\t\n>(esn and min) - 4 concurrent queries \n>running\n>\t\t\t\t\t\t\n>Database Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n> \n>\n>------------------------------------------------------------------------\n>\n>Field Name\tField Type\tNullable\tIndexed\n>type\t\tint\t\tno\t\tno\n>esn\t\tchar (10)\tno\t\tyes\n>min\t\tchar (10)\tno\t\tyes\n>datetime\ttimestamp\tno\t\tyes\n>opc0\t\tchar (3)\tno\t\tno\n>opc1\t\tchar (3)\tno\t\tno\n>opc2\t\tchar (3)\tno\t\tno\n>dpc0\t\tchar (3)\tno\t\tno\n>dpc1\t\tchar (3)\tno\t\tno\n>dpc2\t\tchar (3)\tno\t\tno\n>npa\t\tchar (3)\tno\t\tno\n>nxx\t\tchar (3)\tno\t\tno\n>rest\t\tchar (4)\tno\t\tno\n>field0\t\tint\t\tyes\t\tno\n>field1\t\tchar (4)\tyes\t\tno\n>field2\t\tint\t\tyes\t\tno\n>field3\t\tchar (4)\tyes\t\tno\n>field4\t\tint\t\tyes\t\tno\n>field5\t\tchar (4)\tyes\t\tno\n>field6\t\tint\t\tyes\t\tno\n>field7\t\tchar (4)\tyes\t\tno\n>field8\t\tint\t\tyes\t\tno\n>field9\t\tchar (4)\tyes\t\tno\n>\n> \n>\n>------------------------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Thu, 03 Oct 2002 08:54:29 -0400", "msg_from": "\"Charles H. Woloszynski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\nShridhar,\n\nIt's one hell of a DB you're building. 
I'm sure I'm not the only one interested\nso to satisfy those of us who are nosey: can you say what the application is?\n\nI'm sure we'll all understand if it's not possible for you mention such\ninformation.\n\n\n--\nNigel J. Andrews\n\n\nOn Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Hi,\n> \n> Today we concluded test for database performance. Attached are results and the \n> schema, for those who have missed earlier discussion on this.\n> \n> We have (almost) decided that we will partition the data across machines. The \n> theme is, after every some short interval a burst of data will be entered in \n> new table in database, indexed and vacuume. The table(s) will be inherited so \n> that query on base table will fetch results from all the children. The \n> application has to consolidate all the data per node basis. If the database is \n> not postgresql, app. has to consolidate data across partitions as well.\n> \n> Now we need to investigate whether selecting on base table to include children \n> would use indexes created on children table.\n> \n> It's estimated that when entire data is gathered, total number of children \n> tables would be around 1K-1.1K across all machines. \n> \n> This is in point of average rate of data insertion i.e. 5K records/sec and \n> total data size, estimated to be 9 billion rows max i.e. estimated database \n> size is 900GB. Obviously it's impossible to keep insertion rate on an indexed \n> table high as data grows. So partitioning/inheritance looks better approach. \n> \n> Postgresql is not the final winner as yet. Mysql is in close range. I will keep \n> you guys posted about the result.\n> \n> Let me know about any comments..\n> \n> Bye\n> Shridhar\n\n", "msg_date": "Thu, 3 Oct 2002 13:56:03 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 13:56, Nigel J. Andrews wrote:\n> It's one hell of a DB you're building. I'm sure I'm not the only one interested\n> so to satisfy those of us who are nosey: can you say what the application is?\n> \n> I'm sure we'll all understand if it's not possible for you mention such\n> information.\n\nWell, I can't tell everything but somethings I can..\n\n1) This is a system that does not have online capability yet. This is an \nattempt to provide one.\n\n2) The goal is to avoid costs like licensing oracle. I am sure this would make \na great example for OSDB advocacy, which ever database wins..\n\n3) The database size estimates, I put earlier i.e. 9 billion tuples/900GB data \nsize, are in a fixed window. The data is generated from some real time systems. \nYou can imagine the rate.\n\n4) Further more there are timing restrictions attached to it. 5K inserts/sec. \n4800 queries per hour with response time of 10 sec. each. It's this aspect that \nhas forced us for partitioning..\n\nAnd contrary to my earlier information, this is going to be a live system \nrather than a back up one.. A better win to postgresql.. I hope it makes it.\n\nAnd BTW, all these results were on reiserfs. We didn't found much of difference \nin write performance between them. So we stick to reiserfs. 
And of course we \ngot the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \nover RHL7.2..\n\nBye\n Shridhar\n\n--\nQOTD:\t\"Do you smell something burning or is it me?\"\t\t-- Joan of Arc\n\n", "msg_date": "Thu, 03 Oct 2002 19:33:30 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Forgive my ignorance, but what about 2.4.19-16 is that much faster? Are \nwe talking about 2x improvement for your tests? We are currently on \n2.4.9 and looking at the performance and wondering... so any comments \nare appreciated.\n\nCharlie\n\n\nShridhar Daithankar wrote:\n\n>And BTW, all these results were on reiserfs. We didn't found much of difference \n>in write performance between them. So we stick to reiserfs. And of course we \n>got the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \n>over RHL7.2..\n>\n>Bye\n> Shridhar\n>\n>--\n>QOTD:\t\"Do you smell something burning or is it me?\"\t\t-- Joan of Arc\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Thu, 03 Oct 2002 10:26:59 -0400", "msg_from": "\"Charles H. Woloszynski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 10:26, Charles H. Woloszynski wrote:\n\n> Forgive my ignorance, but what about 2.4.19-16 is that much faster? Are \n> we talking about 2x improvement for your tests? We are currently on \n> 2.4.9 and looking at the performance and wondering... so any comments \n> are appreciated.\n\nWell, for one thing, 2.4.19 contains backported O(1) scheduler patch which \nimproves SMP performance by heaps as task queue is per cpu rather than one per \nsystem. I don't think any system routinely runs thousands of processes unless \nit's a web/ftp/mail server. In that case improved scheduling wuld help as \nwell..\n\nBesides there were major VM rewrites/changes after 2.4.10 which corrected \nalmost all the major VM fiaskos on linux. For anything VM intensive it's \nrecommended that you run 2.4.17 at least.\n\nI would say it's worth going for it.\n\nBye\n Shridhar\n\n--\nSturgeon's Law:\t90% of everything is crud.\n\n", "msg_date": "Thu, 03 Oct 2002 21:20:16 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 19:33, Shridhar Daithankar wrote:\n\n> On 3 Oct 2002 at 13:56, Nigel J. Andrews wrote:\n> > It's one hell of a DB you're building. I'm sure I'm not the only one interested\n> > so to satisfy those of us who are nosey: can you say what the application is?\n> > \n> > I'm sure we'll all understand if it's not possible for you mention such\n> > information.\n> \n> Well, I can't tell everything but somethings I can..\n> \n> 1) This is a system that does not have online capability yet. This is an \n> attempt to provide one.\n> \n> 2) The goal is to avoid costs like licensing oracle. I am sure this would make \n> a great example for OSDB advocacy, which ever database wins..\n> \n> 3) The database size estimates, I put earlier i.e. 
9 billion tuples/900GB data \n> size, are in a fixed window. The data is generated from some real time systems. \n> You can imagine the rate.\n\nRead that fixed time window..\n\n> \n> 4) Further more there are timing restrictions attached to it. 5K inserts/sec. \n> 4800 queries per hour with response time of 10 sec. each. It's this aspect that \n> has forced us for partitioning..\n> \n> And contrary to my earlier information, this is going to be a live system \n> rather than a back up one.. A better win to postgresql.. I hope it makes it.\n> \n> And BTW, all these results were on reiserfs. We didn't found much of difference \n> in write performance between them. So we stick to reiserfs. And of course we \n> got the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \n> over RHL7.2..\n\nWell, we were comparing ext3 v/s reiserfs. I don't remember the journalling \nmode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n0 from RAID-5 might have something to do about it.\n\nThere was a discussion on hackers some time back as in which file system is \nbetter. I hope this might have an addition over it..\n\n\nBye\n Shridhar\n\n--\n\t\"What terrible way to die.\"\t\"There are no good ways.\"\t\t-- Sulu and Kirk, \"That \nWhich Survives\", stardate unknown\n\n", "msg_date": "Thu, 03 Oct 2002 21:26:43 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "NOTE: Setting follow up to the performance list\n\nFunny that the status quo seems to be if you need fast selects on data\nthat has few inserts to pick mysql, otherwise if you have a lot of\ninserts and don't need super fast selects go with PostgreSQL; yet your\ndata seems to cut directly against this. \n\nI'm curious, did you happen to run the select tests while also running\nthe insert tests? IIRC the older mysql versions have to lock the table\nwhen doing the insert, so select performance goes in the dumper in that\nscenario, perhaps that's not an issue with 3.23.52? \n\nIt also seems like the vacuum after each insert is unnecessary, unless\nyour also deleting/updating data behind it. 
Perhaps just running an\nANALYZE on the table would suffice while reducing overhead.\n\nRobert Treat\n\nOn Thu, 2002-10-03 at 08:36, Shridhar Daithankar wrote:\n> Machine \t\t\t\t\t\t\t\t\n> Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n> \"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n> \"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n> \"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n> \"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\t\t\n> Performance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n> \t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n> \t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n> \t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\n> Complete Data\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\t\t\n> Inserts + building a composite index\t\t\t\t\t\t\t\t\n> \"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n> \"about 100 bytes each, schema on \n> 'schema' sheet\"\t\t\t\t\t\t\t\t\n> \"composite index on 3 fields \n> (esn, min, datetime)\"\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Load Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n> \t\t\t\t\t\t\n> Database Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n> \t\t\t\t\t\t\n> Average per partition\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Inserts + building a composite index\t\t\t\t\t\t\n> \"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n> \"about 100 bytes each, schema on \n> 'schema' sheet\"\t\t\t\t\t\t\n> \"composite index on 3 fields \n> (esn, min, datetime)\"\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Select Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\n> based on equality match of 2 fields\t\t\t\t\t\t\n> (esn and min) - 4 concurrent queries \n> running\n> \t\t\t\t\t\t\n> Database Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n> ----\n\n\n", "msg_date": "03 Oct 2002 11:57:29 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 8:54, Charles H. Woloszynski wrote:\n\n> Can you comment on the tools you are using to do the insertions (Perl, \n> Java?) and the distribution of data (all random, all static), and the \n> transaction scope (all inserts in one transaction, each insert as a \n> single transaction, some group of inserts as a transaction).\n\nMost proably it's all inserts in one transaction spread almost uniformly over \naround 15-20 tables. Of course there will be bunch of transactions..\n\n> I'd be curious what happens when you submit more queries than you have \n> processors (you had four concurrent queries and four CPUs), if you care \n> to run any additional tests. Also, I'd report the query time in \n> absolute (like you did) and also in 'Time/number of concurrent queries\". \n> This will give you a sense of how the system is scaling as the workload \n> increases. Personally I am more concerned about this aspect than the \n> load time, since I am going to guess that this is where all the time is \n> spent. \n\nI don't think so. Because we plan to put enough shared buffers that would \nalmost contain the indexes in RAM if not data. Besides number of tuples \nexpected per query are not many. 
So more concurrent queries are not going to \nhog anything other than CPU power at most.\n\nOur major concern remains load time as data is generated in real time and is \nexpecetd in database with in specified time period. We need indexes for query \nand inserting into indexed table is on hell of a job. We did attempt inserting \n8GB of data in indexed table. It took almost 20 hours at 1K tuples per second \non average.. Though impressive it's not acceptable for that load..\n> \n> Was the original posting on GENERAL or HACKERS. Is this moving the \n> PERFORMANCE for follow-up? I'd like to follow this discussion and want \n> to know if I should join another group?\n\nShall I subscribe to performance? What's the exat list name? Benchmarks? I \ndon't see anything as performance mailing list on this page..\nhttp://developer.postgresql.org/mailsub.php?devlp\n\n> P.S. Anyone want to comment on their expectation for 'commercial' \n> databases handling this load? I know that we cannot speak about \n> specific performance metrics on some products (licensing restrictions) \n> but I'd be curious if folks have seen some of the databases out there \n> handle these dataset sizes and respond resonably.\n\nWell, if something handles such kind of data with single machine and costs \nunder USD20K for entire setup, I would be willing to recommend that to client..\n\nBTW we are trying same test on HP-UX. I hope we get some better figures on 64 \nbit machines..\n\nBye\n Shridhar\n\n--\nClarke's Conclusion:\tNever let your sense of morals interfere with doing the \nright thing.\n\n", "msg_date": "Thu, 03 Oct 2002 21:37:55 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> > Was the original posting on GENERAL or HACKERS. Is this moving the\n> > PERFORMANCE for follow-up? I'd like to follow this discussion and want\n> > to know if I should join another group?\n> \n> Shall I subscribe to performance? What's the exat list name? Benchmarks? I\n> don't see anything as performance mailing list on this page..\n> http://developer.postgresql.org/mailsub.php?devlp\n\nIt's a fairly new mailing list. :)\n\[email protected]\n\nEasiest way to subscribe is by emailing [email protected] with:\n\nsubscribe pgsql-performance\n\nas the message body.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip> \n> Bye\n> Shridhar\n> \n> --\n> Clarke's Conclusion: Never let your sense of morals interfere with doing the\n> right thing.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 02:16:06 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 11:57, Robert Treat wrote:\n\n> NOTE: Setting follow up to the performance list\n> \n> Funny that the status quo seems to be if you need fast selects on data\n> that has few inserts to pick mysql, otherwise if you have a lot of\n> inserts and don't need super fast selects go with PostgreSQL; yet your\n> data seems to cut directly against this. \n\nWell, couple of things..\n\nThe number of inserts aren't few. it's 5000/sec.required in the field Secondly \nI don't know really but postgresql seems doing pretty fine in parallel selects. \nIf we use mysql with transaction support then numbers are really close..\n\nMay be it's time to rewrite famous myth that postgresql is slow. When properly \ntuned or given enough head room, it's almost as fast as mysql..\n\n> I'm curious, did you happen to run the select tests while also running\n> the insert tests? IIRC the older mysql versions have to lock the table\n> when doing the insert, so select performance goes in the dumper in that\n> scenario, perhaps that's not an issue with 3.23.52? \n\nIMO even if it locks tables that shouldn't affect select performance. It would \nbe fun to watch when we insert multiple chunks of data and fire queries \nconcurrently. I would be surprised if mysql starts slowing down..\n\n> It also seems like the vacuum after each insert is unnecessary, unless\n> your also deleting/updating data behind it. Perhaps just running an\n> ANALYZE on the table would suffice while reducing overhead.\n\nI believe that was vacuum analyze only. But still it takes lot of time. Good \nthing is it's not blocking..\n\nAnyway I don't think such frequent vacuums are going to convince planner to \nchoose index scan over sequential scan. I am sure it's already convinced..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:[email protected]\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Thu, 03 Oct 2002 21:47:03 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 10:56, Shridhar Daithankar wrote:\n> Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling \n> mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> 0 from RAID-5 might have something to do about it.\n> \n> There was a discussion on hackers some time back as in which file system is \n> better. I hope this might have an addition over it..\n\n\nHmm. Reiserfs' claim to fame is it's low latency with many, many small\nfiles and that it's journaled. I've never seem anyone comment about it\nbeing considered an extremely fast file system in an general computing\ncontext nor have I seen any even hint at it as a file system for use in\nheavy I/O databases. This is why Reiserfs is popular with news and\nsquid cache servers as it's almost an ideal fit. That is, tons of small\nfiles or directories contained within a single directory. 
As such, I'm\nvery surprised that reiserfs is even in the running for your comparison.\n\nMight I point you toward XFS, JFS, or ext3, ? As I understand it, XFS\nand JFS are going to be your preferred file systems for for this type of\napplication with XFS in the lead as it's tool suite is very rich and\nrobust. I'm actually lacking JFS experience but from what I've read,\nit's a notch or two back from XFS in robustness (assuming we are talking\nLinux here). Feel free to read and play to find out for your self. I'd\nrecommend that you start playing with XFS to see how the others\ncompare. After all, XFS' specific claim to fame is high throughput w/\nlow latency on large and very large files. Furthermore, they even have\na real time mechanism that you can further play with to see how it\neffects your throughput and/or latencies.\n\nGreg", "msg_date": "03 Oct 2002 11:23:28 -0500", "msg_from": "Greg Copeland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 12:17, Shridhar Daithankar wrote:\n> On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> May be it's time to rewrite famous myth that postgresql is slow. \n\nThat myth has been dis-proven long ago, it just takes awhile for\neveryone to catch on ;-)\n\nWhen properly \n> tuned or given enough head room, it's almost as fast as mysql..\n> \n> > I'm curious, did you happen to run the select tests while also running\n> > the insert tests? IIRC the older mysql versions have to lock the table\n> > when doing the insert, so select performance goes in the dumper in that\n> > scenario, perhaps that's not an issue with 3.23.52? \n> \n> IMO even if it locks tables that shouldn't affect select performance. It would \n> be fun to watch when we insert multiple chunks of data and fire queries \n> concurrently. I would be surprised if mysql starts slowing down..\n> \n\nHmm... been awhile since I dug into mysql internals, but IIRC once the\ntable was locked, you had to wait for the insert to complete so the\ntable would be unlocked and the select could go through. (maybe this is\na myth that I need to get clued in on)\n\n> > It also seems like the vacuum after each insert is unnecessary, unless\n> > your also deleting/updating data behind it. Perhaps just running an\n> > ANALYZE on the table would suffice while reducing overhead.\n> \n> I believe that was vacuum analyze only. But still it takes lot of time. Good \n> thing is it's not blocking..\n> \n> Anyway I don't think such frequent vacuums are going to convince planner to \n> choose index scan over sequential scan. I am sure it's already convinced..\n> \n\nMy thinking was that if your just doing inserts, you need to update the\nstatistics but don't need to check on unused tuples. \n\nRobert Treat\n\n", "msg_date": "03 Oct 2002 12:26:34 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 11:23, Greg Copeland wrote:\n\n> On Thu, 2002-10-03 at 10:56, Shridhar Daithankar wrote:\n> > Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling \n> > mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> > 0 from RAID-5 might have something to do about it.\n> > \n> > There was a discussion on hackers some time back as in which file system is \n> > better. I hope this might have an addition over it..\n> \n> \n> Hmm. 
Reiserfs' claim to fame is it's low latency with many, many small\n> files and that it's journaled. I've never seem anyone comment about it\n> being considered an extremely fast file system in an general computing\n> context nor have I seen any even hint at it as a file system for use in\n> heavy I/O databases. This is why Reiserfs is popular with news and\n> squid cache servers as it's almost an ideal fit. That is, tons of small\n> files or directories contained within a single directory. As such, I'm\n> very surprised that reiserfs is even in the running for your comparison.\n> \n> Might I point you toward XFS, JFS, or ext3, ? As I understand it, XFS\n> and JFS are going to be your preferred file systems for for this type of\n> application with XFS in the lead as it's tool suite is very rich and\n> robust. I'm actually lacking JFS experience but from what I've read,\n> it's a notch or two back from XFS in robustness (assuming we are talking\n> Linux here). Feel free to read and play to find out for your self. I'd\n> recommend that you start playing with XFS to see how the others\n> compare. After all, XFS' specific claim to fame is high throughput w/\n> low latency on large and very large files. Furthermore, they even have\n> a real time mechanism that you can further play with to see how it\n> effects your throughput and/or latencies.\n\nI would try that. Once we are thr. with tests at our hands..\n\nBye\n Shridhar\n\n--\n\t\"The combination of a number of things to make existence worthwhile.\"\t\"Yes, \nthe philosophy of 'none,' meaning 'all.'\"\t\t-- Spock and Lincoln, \"The Savage \nCurtain\", stardate 5906.4\n\n", "msg_date": "Thu, 03 Oct 2002 22:00:18 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 12:26, Robert Treat wrote:\n\n> On Thu, 2002-10-03 at 12:17, Shridhar Daithankar wrote:\n> > On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> > May be it's time to rewrite famous myth that postgresql is slow. \n> \n> That myth has been dis-proven long ago, it just takes awhile for\n> everyone to catch on ;-)\n\n:-)\n\n> Hmm... been awhile since I dug into mysql internals, but IIRC once the\n> table was locked, you had to wait for the insert to complete so the\n> table would be unlocked and the select could go through. (maybe this is\n> a myth that I need to get clued in on)\n\nIf that turns out to be true, I guess mysql will nose dive out of window.. May \nbe time to run a test that's nearer to real world expectation, especially in \nterms on concurrency..\n\nI don't think tat will be an issue with mysql with transaction support. The \nvanilla one might suffer.. Not the other one.. At least theoretically..\n\n> My thinking was that if your just doing inserts, you need to update the\n> statistics but don't need to check on unused tuples. \n\nAny other way of doing that other than vacuum analyze? I thought that was the \nonly way..\n\nBye\n Shridhar\n\n--\n\"Even more amazing was the realization that God has Internet access. 
Iwonder \nif He has a full newsfeed?\"(By Matt Welsh)\n\n", "msg_date": "Thu, 03 Oct 2002 22:05:24 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 03 Oct 2002 18:06:10 +0530, \"Shridhar Daithankar\"\n<[email protected]> wrote:\n>Machine \t\t\t\t\t\t\t\t\n>Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n>\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n>\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n>\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\n\nShridhar,\n\nforgive me if I ask what has been said before: Did you run at 100%\nCPU or was IO bandwidth your limit? And is the answer the same for\nall three configurations?\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 18:44:09 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Shridhar Daithankar wrote:\n\n>On 3 Oct 2002 at 11:57, Robert Treat wrote:\n>\n> \n>\n>>NOTE: Setting follow up to the performance list\n>>\n>>Funny that the status quo seems to be if you need fast selects on data\n>>that has few inserts to pick mysql, otherwise if you have a lot of\n>>inserts and don't need super fast selects go with PostgreSQL; yet your\n>>data seems to cut directly against this. \n>> \n>>\n>\n>Well, couple of things..\n>\n>The number of inserts aren't few. it's 5000/sec.required in the field Secondly \n>I don't know really but postgresql seems doing pretty fine in parallel selects. \n>If we use mysql with transaction support then numbers are really close..\n>\n>May be it's time to rewrite famous myth that postgresql is slow. When properly \n>tuned or given enough head room, it's almost as fast as mysql..\n> \n>\n\nIn the case of concurrent transactions MySQL does not do as well due to \nvery bad locking behavious. PostgreSQL is far better because it does row \nlevel locking instead of table locking.\nIf you have many concurrent transactions MySQL performs some sort of \n\"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \nthat the database does not block.\n\n\n>>I'm curious, did you happen to run the select tests while also running\n>>the insert tests? IIRC the older mysql versions have to lock the table\n>>when doing the insert, so select performance goes in the dumper in that\n>>scenario, perhaps that's not an issue with 3.23.52? \n>> \n>>\n>\n>IMO even if it locks tables that shouldn't affect select performance. It would \n>be fun to watch when we insert multiple chunks of data and fire queries \n>concurrently. I would be surprised if mysql starts slowing down..\n> \n>\n\nIn the case of concurrent SELECTs and INSERT/UPDATE/DELETE operations \nMySQL will slow down for sure. The more concurrent transactions you have \nthe worse MySQL will be.\n\n>>It also seems like the vacuum after each insert is unnecessary, unless\n>>your also deleting/updating data behind it. Perhaps just running an\n>>ANALYZE on the table would suffice while reducing overhead.\n>> \n>>\n>\n>I believe that was vacuum analyze only. But still it takes lot of time. Good \n>thing is it's not blocking..\n>\n>Anyway I don't think such frequent vacuums are going to convince planner to \n>choose index scan over sequential scan. 
I am sure it's already convinced..\n> \n>\n\nPostgreSQL allows you to improve execution plans by giving the planner a \nhint.\nIn addition to that: if you need REAL performance and if you are running \nsimilar queries consider using SPI.\n\nAlso: 7.3 will support PREPARE/EXECUTE.\n\nIf you are running MySQL you will not be able to add features to the \ndatabase easily.\nIn the case of PostgreSQL you have a broad range of simple interfaces \nwhich make many things pretty simple (eg. optimized data types in < 50 \nlines of C code).\n\nPostgreSQL is the database of the future and you can perform a lot of \ntuning.\nMySQL is a simple frontend to a filesystem and it is fast as long as you \nare doing SELECT 1+1 operations.\n\nAlso: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \nbuilt on Monty Widenius and the core team = Monty.\nAlso: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n\nIn the past few years I have seen that there is no database system which \ncan beat PostgreSQL's flexibility and stability.\nI am familiar with various database systems but believe: PostgreSQL is \nthe best choice.\n\n Hans\n\n\n>Regards,\n> Shridhar\n>\n>-----------------------------------------------------------\n>Shridhar Daithankar\n>LIMS CPE Team Member, PSPL.\n>mailto:[email protected]\n>Phone:- +91-20-5678900 Extn.270\n>Fax :- +91-20-5678901 \n>-----------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n> \n>\n\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Thu, 03 Oct 2002 18:51:05 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 03 Oct 2002 21:47:03 +0530, \"Shridhar Daithankar\"\n<[email protected]> wrote:\n>I believe that was vacuum analyze only.\n\nWell there is\n\n\tVACUUM [tablename]; \n\nand there is\n\n\tANALYZE [tablename];\n\nAnd\n\n\tVACUUM ANALYZE [tablename];\n\nis VACUUM followed by ANALYZE.\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 18:53:32 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 11:17, Shridhar Daithankar wrote:\n> On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> \n[snip]\n> > I'm curious, did you happen to run the select tests while also running\n> > the insert tests? IIRC the older mysql versions have to lock the table\n> > when doing the insert, so select performance goes in the dumper in that\n> > scenario, perhaps that's not an issue with 3.23.52? \n> \n> IMO even if it locks tables that shouldn't affect select performance. It would \n> be fun to watch when we insert multiple chunks of data and fire queries \n> concurrently. I would be surprised if mysql starts slowing down..\n\nWhat kind of lock? Shared lock or exclusive lock? 
If SELECT\nperformance tanked when doing simultaneous INSERTs, then maybe there\nwere exclusive table locks.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"What other evidence do you have that they are terrorists, |\n| other than that they trained in these camps?\" |\n| 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n| men arrested near Buffalo NY |\n+------------------------------------------------------------+\n\n", "msg_date": "03 Oct 2002 12:38:49 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 11:51, Hans-Jürgen Schönig wrote:\n> Shridhar Daithankar wrote:\n> \n> >On 3 Oct 2002 at 11:57, Robert Treat wrote:\n[snip]\n> PostgreSQL allows you to improve execution plans by giving the planner a \n> hint.\n> In addition to that: if you need REAL performance and if you are running \n> similar queries consider using SPI.\n\nWhat is SPI?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"What other evidence do you have that they are terrorists, |\n| other than that they trained in these camps?\" |\n| 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n| men arrested near Buffalo NY |\n+------------------------------------------------------------+\n\n", "msg_date": "03 Oct 2002 15:55:35 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, Oct 03, 2002 at 06:51:05PM +0200, Hans-J?rgen Sch?nig wrote:\n\n> In the case of concurrent transactions MySQL does not do as well due to \n> very bad locking behavious. PostgreSQL is far better because it does row \n> level locking instead of table locking.\n\nIt is my understanding that MySQL no longer does this on InnoDB\ntables. Whether various bag-on-the-side table types are a good thing\nI will leave to others; but there's no reason to go 'round making\nclaims about old versions of MySQL any more than there is a reason to\ncontinue to talk about PostgreSQL not being crash safe. MySQL has\nmoved along nearly as quickly as PostgreSQL. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Oct 2002 17:09:20 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "May I suggest that instead of [pgsql-performance] that [PERF] be used to\nsave some of the subject line.\n\nRon Johnson wrote:\n> \n> On Thu, 2002-10-03 at 11:51, Hans-Jürgen Schönig wrote:\n> > Shridhar Daithankar wrote:\n> >\n> > >On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> [snip]\n> > PostgreSQL allows you to improve execution plans by giving the planner a\n> > hint.\n> > In addition to that: if you need REAL performance and if you are running\n> > similar queries consider using SPI.\n> \n> What is SPI?\n> \n> --\n> +------------------------------------------------------------+\n> | Ron Johnson, Jr. 
mailto:[email protected] |\n> | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> | |\n> | \"What other evidence do you have that they are terrorists, |\n> | other than that they trained in these camps?\" |\n> | 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n> | men arrested near Buffalo NY |\n> +------------------------------------------------------------+\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 03 Oct 2002 17:12:02 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "use [PERF] instead of " }, { "msg_contents": "On 3 Oct 2002 at 18:53, Manfred Koizar wrote:\n\n> On Thu, 03 Oct 2002 21:47:03 +0530, \"Shridhar Daithankar\"\n> <[email protected]> wrote:\n> >I believe that was vacuum analyze only.\n> \n> Well there is\n> \n> \tVACUUM [tablename]; \n> \n> and there is\n> \n> \tANALYZE [tablename];\n> \n> And\n> \n> \tVACUUM ANALYZE [tablename];\n> \n> is VACUUM followed by ANALYZE.\n\nI was using vacuum analyze. \n\nGood that you pointed out. Now I will modify the postgresql auto vacuum daemon \nthat I wrote to analyze only in case of excesive inserts. I hope that's lighter \non performance compared to vacuum analyze..\n\nBye\n Shridhar\n\n--\nMix's Law:\tThere is nothing more permanent than a temporary building.\tThere is \nnothing more permanent than a temporary tax.\n\n", "msg_date": "Fri, 04 Oct 2002 13:30:54 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 3 Oct 2002, Hans-J�rgen Sch�nig wrote:\n\n> In the case of concurrent transactions MySQL does not do as well due to \n> very bad locking behavious. PostgreSQL is far better because it does row \n> level locking instead of table locking.\n> If you have many concurrent transactions MySQL performs some sort of \n> \"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \n> that the database does not block.\n\nWhile I'm no big fan of MySQL, I must point out that with innodb tables, \nthe locking is row level, and the ability to handle parallel read / write \nis much improved.\n\nAlso, Postgresql does NOT use row level locking, it uses MVCC, which is \n\"better than row level locking\" as Tom puts it.\n\nOf course, hot backup is only 2,000 Euros for an innodb table mysql, while \nhot backup for postgresql is free. :-)\n\nThat said, MySQL still doesn't handle parallel load nearly as well as \npostgresql, it's just better than it once was.\n\n> Also: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \n> built on Monty Widenius and the core team = Monty.\n> Also: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n\nThis is a very valid point. The \"committee\" that creates and steers \nPostgresql is very much a meritocracy. The \"committee\" that steers MySQL \nis Monty. 
\n\nI'm much happier knowing that every time something important needs to be \ndone we have a whole cupboard full of curmudgeons arguing the fine points \nso that the \"right thing\" gets done.\n\n\n", "msg_date": "Fri, 4 Oct 2002 10:05:10 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "MVCC = great ...\nI know that is not row level locking but that's the way things can be \nexplained more easily. Many people are asking my how things work and \nthis way it is easier to understand. Never tell a trainee about deadlock \ndetection and co *g*.\n\nI am happy that the PostgreSQL core team + all developers are not like \nMonty ...\nI am happy to PostgreSQL has developers such as Bruce, Tom, Jan, Marc, \nVadim, Joe, Neil, Christopher, etc. (just to name a few) ...\n\nYes, it is said to be better than it was but that's not the point:\nMySQL = Monty SQL <> ANSI SQL ...\n\nBelieve me, the table will turn and finally the better system will succeed.\nOne we have clustering, PITR, etc. running people will see how real \ndatabases work :).\n\n Hans\n\n\n\nscott.marlowe wrote:\n\n>On Thu, 3 Oct 2002, Hans-J�rgen Sch�nig wrote:\n>\n> \n>\n>>In the case of concurrent transactions MySQL does not do as well due to \n>>very bad locking behavious. PostgreSQL is far better because it does row \n>>level locking instead of table locking.\n>>If you have many concurrent transactions MySQL performs some sort of \n>>\"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \n>>that the database does not block.\n>> \n>>\n>\n>While I'm no big fan of MySQL, I must point out that with innodb tables, \n>the locking is row level, and the ability to handle parallel read / write \n>is much improved.\n>\n>Also, Postgresql does NOT use row level locking, it uses MVCC, which is \n>\"better than row level locking\" as Tom puts it.\n>\n>Of course, hot backup is only 2,000 Euros for an innodb table mysql, while \n>hot backup for postgresql is free. :-)\n>\n>That said, MySQL still doesn't handle parallel load nearly as well as \n>postgresql, it's just better than it once was.\n>\n> \n>\n>>Also: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \n>>built on Monty Widenius and the core team = Monty.\n>>Also: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n>> \n>>\n>\n>This is a very valid point. The \"committee\" that creates and steers \n>Postgresql is very much a meritocracy. The \"committee\" that steers MySQL \n>is Monty. 
\n>\n>I'm much happier knowing that every time something important needs to be \n>done we have a whole cupboard full of curmudgeons arguing the fine points \n>so that the \"right thing\" gets done.\n> \n>\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 18:30:47 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "Andrew Sullivan <[email protected]> wrote:\n\n> On Thu, Oct 03, 2002 at 06:51:05PM +0200, Hans-J?rgen Sch?nig wrote:\n>\n> > In the case of concurrent transactions MySQL does not do as well due to\n> > very bad locking behavious. PostgreSQL is far better because it does row\n> > level locking instead of table locking.\n>\n> It is my understanding that MySQL no longer does this on InnoDB\n> tables. Whether various bag-on-the-side table types are a good thing\n> I will leave to others; but there's no reason to go 'round making\n> claims about old versions of MySQL any more than there is a reason to\n> continue to talk about PostgreSQL not being crash safe. MySQL has\n> moved along nearly as quickly as PostgreSQL.\n\nLocking and transactions is not fine in MySQL (with InnoDB) though. I tried\nto do selects on a table I was concurrently inserting to. In a single thread\nI was constantly inserting 1000 rows per transaction. While inserting I did\nsome random selects on the same table. It often happend that the insert\ntransactions were aborted due to dead lock problems. There I see the problem\nwith locking reads.\nI like PostgreSQL's MVCC!\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Fri, 4 Oct 2002 18:38:21 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\nIn Oracle you can Pin large objects into memory to prevent frequent\nreloads. Is there anyway to do this with Postgres? It appears that some\nof our tables that get hit a lot may get kicked out of memory when we\naccess some of our huge tables. Then they have to wait for I/O to get\nloaded back in. \n\n\nDavid Blood\nMatraex, Inc\n\n\n\n", "msg_date": "Fri, 4 Oct 2002 10:46:57 -0600", "msg_from": "\"David Blood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Pinning a table into memory" }, { "msg_contents": "\"David Blood\" <[email protected]> writes:\n> In Oracle you can Pin large objects into memory to prevent frequent\n> reloads. Is there anyway to do this with Postgres?\n\nI can never understand why people think this would be a good idea.\nIf you're hitting a table frequently, it will stay in memory anyway\n(either in Postgres shared buffers or kernel disk cache). If you're\nnot hitting it frequently enough to keep it swapped in, then whatever\nis getting swapped in instead is probably a better candidate to be\noccupying the space. 
ISTM that a manual \"pin this table\" knob would\nmostly have the effect of making performance worse, whenever the\nsystem activity is slightly different from the situation you had in\nmind when you installed the pin.\n\nHaving said that, I'll freely concede that our cache management\nalgorithms could use improvement (and there are people looking at\nthat right now). But a manual pin doesn't seem like a better answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 14:47:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pinning a table into memory " }, { "msg_contents": "On Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling\n> mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> 0 from RAID-5 might have something to do about it.\n\nThat will have a massive, massive effect on performance. Depending on\nyour RAID subsystem, you can except RAID-0 to be between two and twenty\ntimes as fast for writes as RAID-5.\n\nIf you compared one filesystem on RAID-5 and another on RAID-0,\nyour results are likely not at all indicative of file system\nperformance.\n\nNote that I've redirected followups to the pgsql-performance list.\nAvoiding cross-posting would be nice, since I am getting lots of\nduplicate messages these days.\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 7 Oct 2002 11:27:04 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Our major concern remains load time as data is generated in real time and is\n> expecetd in database with in specified time period.\n\nIf your time period is long enough, you can do what I do, which is\nto use partial indexes so that the portion of the data being loaded\nis not indexed. That will speed your loads quite a lot. Aftewards\nyou can either generate another partial index for the range you\nloaded, or generate a new index over both old and new data, and\nthen drop the old index.\n\nThe one trick is that the optimizer is not very smart about combining\nmultiple indexes, so you often need to split your queries across\nthe two \"partitions\" of the table that have separate indexes.\n\n> Shall I subscribe to performance?\n\nYes, you really ought to. The list is [email protected].\n\ncjs\n-- \nCurt Sampson <[email protected]> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 7 Oct 2002 11:30:57 +0900 (JST)", "msg_from": "Curt Sampson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Curt Sampson <[email protected]> writes:\n> ... Avoiding cross-posting would be nice, since I am getting lots of\n> duplicate messages these days.\n\nCross-posting is a fact of life, and in fact encouraged, on the pg\nlists. I suggest adapting. 
Try sending\n\tset all unique your-email-address\nto the PG majordomo server; this sets you up to get only one copy\nof each cross-posted message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 23:20:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "cross-posts (was Re: Large databases, performance)" }, { "msg_contents": "On 3 Oct 2002 at 8:54, Charles H. Woloszynski wrote:\n\n> I'd be curious what happens when you submit more queries than you have \n> processors (you had four concurrent queries and four CPUs), if you care \n> to run any additional tests. Also, I'd report the query time in \n> absolute (like you did) and also in 'Time/number of concurrent queries\". \n> This will give you a sense of how the system is scaling as the workload \n> increases. Personally I am more concerned about this aspect than the \n> load time, since I am going to guess that this is where all the time is \n> spent. \n\nOK. I am back from my cave after some more tests are done. Here are the \nresults. I am not repeating large part of it but answering your questions..\n\nDon't ask me how these numbers changed. I am not the person who conducts the \ntest neither I have access to the system. Rest(or most ) of the things remains \nsame..\n\nMySQL 3.23.52 with innodb transaction support: \n\n4 concurrent queries \t:- 257.36 ms\n40 concurrent queries\t:- 35.12 ms\n\nPostgresql 7.2.2 \n\n4 concurrent queries \t\t:- 257.43 ms\n40 concurrent \tqueries\t\t:- 41.16 ms\n\nThough I can not report oracle numbers, suffice to say that they fall in \nbetween these two numbers.\n\nOracle seems to be hell lot faster than mysql/postgresql to load raw data even \nwhen it's installed on reiserfs. We plan to run XFS tests later in hope that \nthat would improve mysql/postgresql load times. \n\nIn this run postgresql has better load time than mysql/innodb ( 18270 sec v/s \n17031 sec.) Index creation times are faster as well (100 sec v/s 130 sec). \nDon't know what parameters are changed.\n\nOnly worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \nnumbers include indexes. This is really going to be a problem when things are \ndeployed. Any idea how can it be taken down? \n\nWAL is out, it's not counted.\n\nSchema optimisation is later issue. Right now all three databases are using \nsame schema..\n\nWill it help in this situation if I recompile posgresql with block size say 32K \nrather than 8K default? Will it saev some overhead and offer better performance \nin data load etc?\n\nWill keep you guys updated..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:[email protected]\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Mon, 07 Oct 2002 15:07:29 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "I wonder if the following changes make a difference:\n\n- compile PostgreSQL with CFLAGS=' -O3 '\n- redefine commit delays\n\nalso: keep in mind that you might gain a lot of performance by using the \nSPI if you are running many similar queries\n\ntry 7.3 - as far as I remeber there is a mechanism which caches recent \nexecution plans.\nalso: some overhead was reduced (tuples, backend startup).\n\n Hans\n\n\n>Ok. I am back from my cave after some more tests are done. 
Here are the \n>results. I am not repeating large part of it but answering your questions..\n>\n>Don't ask me how these numbers changed. I am not the person who conducts the \n>test neither I have access to the system. Rest(or most ) of the things remains \n>same..\n>\n>MySQL 3.23.52 with innodb transaction support: \n>\n>4 concurrent queries \t:- 257.36 ms\n>40 concurrent queries\t:- 35.12 ms\n>\n>Postgresql 7.2.2 \n>\n>4 concurrent queries \t\t:- 257.43 ms\n>40 concurrent \tqueries\t\t:- 41.16 ms\n>\n>Though I can not report oracle numbers, suffice to say that they fall in \n>between these two numbers.\n>\n>Oracle seems to be hell lot faster than mysql/postgresql to load raw data even \n>when it's installed on reiserfs. We plan to run XFS tests later in hope that \n>that would improve mysql/postgresql load times. \n>\n>In this run postgresql has better load time than mysql/innodb ( 18270 sec v/s \n>17031 sec.) Index creation times are faster as well (100 sec v/s 130 sec). \n>Don't know what parameters are changed.\n>\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n>numbers include indexes. This is really going to be a problem when things are \n>deployed. Any idea how can it be taken down? \n>\n>WAL is out, it's not counted.\n>\n>Schema optimisation is later issue. Right now all three databases are using \n>same schema..\n>\n>Will it help in this situation if I recompile posgresql with block size say 32K \n>rather than 8K default? Will it saev some overhead and offer better performance \n>in data load etc?\n>\n>Will keep you guys updated..\n>\n>Regards,\n> Shridhar\n>\n>-----------------------------------------------------------\n>Shridhar Daithankar\n>LIMS CPE Team Member, PSPL.\n>mailto:[email protected]\n>Phone:- +91-20-5678900 Extn.270\n>Fax :- +91-20-5678901 \n>-----------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 12:01:32 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance" }, { "msg_contents": "On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> Curt Sampson <[email protected]> writes:\n> > ... Avoiding cross-posting would be nice, since I am getting lots of\n> > duplicate messages these days.\n> \n> Cross-posting is a fact of life, and in fact encouraged, on the pg\n> lists. I suggest adapting. Try sending\n> \tset all unique your-email-address\n> to the PG majordomo server; this sets you up to get only one copy\n> of each cross-posted message.\nThat doesn't seem to work any more:\n\n>>>> set all unique [email protected]\n**** The \"all\" mailing list is not supported at\n**** PostgreSQL User Support Lists.\n\nWhat do I need to send now? \n\nMarc? 
\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "07 Oct 2002 06:50:59 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "> On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> > Curt Sampson <[email protected]> writes:\n> > > ... Avoiding cross-posting would be nice, since I am getting lots of\n> > > duplicate messages these days.\n> >\n> > Cross-posting is a fact of life, and in fact encouraged, on the pg\n> > lists. I suggest adapting. Try sending\n> > set all unique your-email-address\n> > to the PG majordomo server; this sets you up to get only one copy\n> > of each cross-posted message.\n> That doesn't seem to work any more:\n>\n> >>>> set all unique [email protected]\n> **** The \"all\" mailing list is not supported at\n> **** PostgreSQL User Support Lists.\n>\n> What do I need to send now?\n>\n> Marc?\n\nit is:\nset ALL unique your-email\n\nif you also don't want to get emails that have already been cc'd to you, you\ncan use:\n\nset ALL eliminatecc your-email\n\nfor a full list of set options send:\n\nhelp set\n\nto majordomo.\n\nRegards,\nMichael Paesold\n\n\n", "msg_date": "Mon, 7 Oct 2002 14:01:25 +0200", "msg_from": "\"Michael Paesold\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "On Mon, 2002-10-07 at 07:01, Michael Paesold wrote:\n> > On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> > > Curt Sampson <[email protected]> writes:\n> > > > ... Avoiding cross-posting would be nice, since I am getting lots of\n> > > > duplicate messages these days.\n> > >\n> > > Cross-posting is a fact of life, and in fact encouraged, on the pg\n> > > lists. I suggest adapting. Try sending\n> > > set all unique your-email-address\n> > > to the PG majordomo server; this sets you up to get only one copy\n> > > of each cross-posted message.\n> > That doesn't seem to work any more:\n> >\n> > >>>> set all unique [email protected]\n> > **** The \"all\" mailing list is not supported at\n> > **** PostgreSQL User Support Lists.\n> >\n> > What do I need to send now?\n> >\n> > Marc?\n> \n> it is:\n> set ALL unique your-email\n> \n> if you also don't want to get emails that have already been cc'd to you, you\n> can use:\n> \n> set ALL eliminatecc your-email\n> \n> for a full list of set options send:\n> \n> help set\n> \n> to majordomo.\nThanks. That worked great. (I use Mailman, and didn't realize the ALL\nneeded to be capitalized. \n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "07 Oct 2002 07:04:33 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n<[email protected]> wrote:\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n>numbers include indexes. This is really going to be a problem when things are \n>deployed. Any idea how can it be taken down? \n\nShridhar,\n\nif i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\ninteger specifying the length followed by as many characters as the\nlength tells. 
On 32-bit Intel hardware this structure is aligned on a\n4-byte boundary.\n\nFor your row layout this gives the following sizes (look at the \"phys\nsize\" column):\n\n| Field Field Null Indexed phys mini\n| Name Type size \n|--------------------------------------------\n| type int no no 4 4\n| esn char (10) no yes 16 11\n| min char (10) no yes 16 11\n| datetime timestamp no yes 8 8\n| opc0 char (3) no no 8 4\n| opc1 char (3) no no 8 4\n| opc2 char (3) no no 8 4\n| dpc0 char (3) no no 8 4\n| dpc1 char (3) no no 8 4\n| dpc2 char (3) no no 8 4\n| npa char (3) no no 8 4\n| nxx char (3) no no 8 4\n| rest char (4) no no 8 5\n| field0 int yes no 4 4\n| field1 char (4) yes no 8 5\n| field2 int yes no 4 4\n| field3 char (4) yes no 8 5\n| field4 int yes no 4 4\n| field5 char (4) yes no 8 5\n| field6 int yes no 4 4\n| field7 char (4) yes no 8 5\n| field8 int yes no 4 4\n| field9 char (4) yes no 8 5\n| ----- -----\n| 176 116\n\nIgnoring nulls for now, you have to add 32 bytes for a v7.2 heap tuple\nheader and 4 bytes for ItemIdData per tuple, ending up with 212 bytes\nper tuple or ca. 85 GB heap space for 432000000 tuples. Depending on\nfill factor similar calculations give some 30 GB for your index.\n\nNow if we had a datatype with only one byte for the string length,\nchar columns could be byte aligned and we'd have column sizes given\nunder \"mini\" in the table above. The columns would have to be\nrearranged according to alignment requirements.\n\nThus 60 bytes per heap tuple and 8 bytes per index tuple could be\nsaved, resulting in a database size of ~ 85 GB (index included). And\nI bet this would be significantly faster, too.\n\nHackers, do you think it's possible to hack together a quick and dirty\npatch, so that string length is represented by one byte? IOW can a\ndatabase be built that doesn't contain any char/varchar/text value\nlonger than 255 characters in the catalog?\n\nIf I'm not told that this is impossibly, I'd give it a try. Shridhar,\nif such a patch can be made available, would you be willing to test\nit?\n\nWhat can you do right now? Try using v7.3 beta and creating your\ntable WITHOUT OIDS. This saves 8 bytes per tuple; not much, but\nbetter save 4% than nothing.\n\nServus\n Manfred\n", "msg_date": "Mon, 07 Oct 2002 16:10:26 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 7 Oct 2002 at 16:10, Manfred Koizar wrote:\n> if i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\n> integer specifying the length followed by as many characters as the\n> length tells. On 32-bit Intel hardware this structure is aligned on a\n> 4-byte boundary.\n\nThat shouldn't be necessary for a char field as space is always pre-allocated. \nSounds like a possible area of imporvement to me, if that's the case..\n\n> Hackers, do you think it's possible to hack together a quick and dirty\n> patch, so that string length is represented by one byte? IOW can a\n> database be built that doesn't contain any char/varchar/text value\n> longer than 255 characters in the catalog?\n\nI say if it's a char field, there should be no indicator of length as it's not \nrequired. Just store those many characters straight ahead..\n\n> \n> If I'm not told that this is impossibly, I'd give it a try. Shridhar,\n> if such a patch can be made available, would you be willing to test\n> it?\n\nSure. But the server machine is not available this week. Some other project is \nusing it. 
So the results won't be out unless at least a week from now.\n\n\n> What can you do right now? Try using v7.3 beta and creating your\n> table WITHOUT OIDS. This saves 8 bytes per tuple; not much, but\n> better save 4% than nothing.\n\nIIRC there was some header optimisation which saved 4 bytes. So without OIDs \nthat should save 8. Would do that as first next thing.\n\nI talked to my friend regarding postgresql surpassing mysql substantially in \nthis test. He told me that the last test where postgresql took 23000+/150 sec \nfor load/index and mysql took 18,000+/130 index, postgresql was running in \ndefault configuration. He forgot to copy postgresql.conf to data directory \nafter he modified it.\n\nThis time results are correct. Postgresql loads data faster, indexes it faster \nand queries in almost same time.. Way to go..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:[email protected]\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Mon, 07 Oct 2002 19:48:31 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n> MySQL 3.23.52 with innodb transaction support: \n\n> 4 concurrent queries \t:- 257.36 ms\n> 40 concurrent queries\t:- 35.12 ms\n\n> Postgresql 7.2.2 \n\n> 4 concurrent queries \t\t:- 257.43 ms\n> 40 concurrent \tqueries\t\t:- 41.16 ms\n\nI find this pretty fishy. The extreme similarity of the 4-client\nnumbers seems improbable, from what I know of the two databases.\nI suspect your numbers are mostly measuring some non-database-related\noverhead --- communications overhead, maybe?\n\n> Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n> numbers include indexes. This is really going to be a problem when things are\n> deployed. Any idea how can it be taken down? \n\n7.3 should be a little bit better because of Manfred's work on reducing\ntuple header size --- if you create your tables WITHOUT OIDS, you should\nsave 8 bytes per row compared to earlier releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 10:30:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On 7 Oct 2002 at 10:30, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <[email protected]> writes:\n> > MySQL 3.23.52 with innodb transaction support: \n> \n> > 4 concurrent queries \t:- 257.36 ms\n> > 40 concurrent queries\t:- 35.12 ms\n> \n> > Postgresql 7.2.2 \n> \n> > 4 concurrent queries \t\t:- 257.43 ms\n> > 40 concurrent \tqueries\t\t:- 41.16 ms\n> \n> I find this pretty fishy. The extreme similarity of the 4-client\n> numbers seems improbable, from what I know of the two databases.\n> I suspect your numbers are mostly measuring some non-database-related\n> overhead --- communications overhead, maybe?\n\nI don't know but three numbers, postgresql/mysql/oracle all are 25x.xx ms. The \nclients were on same machie as of server. So no real area to point at..\n> \n> > Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n> > numbers include indexes. This is really going to be a problem when things are\n> > deployed. Any idea how can it be taken down? 
\n> \n> 7.3 should be a little bit better because of Manfred's work on reducing\n> tuple header size --- if you create your tables WITHOUT OIDS, you should\n> save 8 bytes per row compared to earlier releases.\n\nGot it..\n\nBye\n Shridhar\n\n--\nSweater, n.:\tA garment worn by a child when its mother feels chilly.\n\n", "msg_date": "Mon, 07 Oct 2002 20:09:55 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n> I say if it's a char field, there should be no indicator of length as\n> it's not required. Just store those many characters straight ahead..\n\nYour assumption fails when considering UNICODE or other multibyte\ncharacter encodings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 11:21:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Mon, 07 Oct 2002 19:48:31 +0530, \"Shridhar Daithankar\"\n<[email protected]> wrote:\n>I say if it's a char field, there should be no indicator of length as it's not \n>required. Just store those many characters straight ahead..\n\nThis is out of reach for a quick hack ...\n\n>Sure. But the server machine is not available this week. Some other project is \n>using it. So the results won't be out unless at least a week from now.\n\n :-)\n\n>This time results are correct. Postgresql loads data faster, indexes it faster \n>and queries in almost same time.. Way to go..\n\nGreat! And now let's work on making selects faster, too.\n\nServus\n Manfred\n", "msg_date": "Mon, 07 Oct 2002 17:22:41 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 7 Oct 2002 at 11:21, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <[email protected]> writes:\n> > I say if it's a char field, there should be no indicator of length as\n> > it's not required. Just store those many characters straight ahead..\n> \n> Your assumption fails when considering UNICODE or other multibyte\n> character encodings.\n\nCorrect but is it possible to have real char string when database is not \nunicode or when locale defines size of char, to be exact?\n\nIn my case varchar does not make sense as all strings are guaranteed to be of \ndefined length. While the argument you have put is correct, it's causing a disk \nspace leak, to say so.\n\nBye\n Shridhar\n\n--\nBoucher's Observation:\tHe who blows his own horn always plays the music\tseveral \noctaves higher than originally written.\n\n", "msg_date": "Tue, 08 Oct 2002 11:14:11 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Tue, Oct 08, 2002 at 11:14:11AM +0530, Shridhar Daithankar wrote:\n> On 7 Oct 2002 at 11:21, Tom Lane wrote:\n> \n> > \"Shridhar Daithankar\" <[email protected]> writes:\n> > > I say if it's a char field, there should be no indicator of length as\n> > > it's not required. 
Just store those many characters straight ahead..\n> > \n> > Your assumption fails when considering UNICODE or other multibyte\n> > character encodings.\n> \n> Correct but is it possible to have real char string when database is not \n> unicode or when locale defines size of char, to be exact?\n> \n> In my case varchar does not make sense as all strings are guaranteed to be of \n> defined length. While the argument you have put is correct, it's causing a disk \n> space leak, to say so.\n\nWell, maybe. But since 7.1 or so char() and varchar() simply became text\nwith some length restrictions. This was one of the reasons. It also\nsimplified a lot of code.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Tue, 8 Oct 2002 17:20:47 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"David Blood\" <[email protected]> writes:\n> > In Oracle you can Pin large objects into memory to prevent frequent\n> > reloads. Is there anyway to do this with Postgres?\n> \n> I can never understand why people think this would be a good idea.\n> If you're hitting a table frequently, it will stay in memory anyway\n> (either in Postgres shared buffers or kernel disk cache). If you're\n> not hitting it frequently enough to keep it swapped in, then whatever\n> is getting swapped in instead is probably a better candidate to be\n> occupying the space. \n\nAs I understand it, he's looking for a mechanism to prevent a single\nsequential scan on a table, larger than the buffer cache, to kick out\neverything else at once. But I agree with you that pinning other objects\nis just mucking with the symptoms instead of curing the desease.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Tue, 08 Oct 2002 09:32:50 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pinning a table into memory" }, { "msg_contents": "On Tue, 2002-10-08 at 02:20, Martijn van Oosterhout wrote:\n> On Tue, Oct 08, 2002 at 11:14:11AM +0530, Shridhar Daithankar wrote:\n> > On 7 Oct 2002 at 11:21, Tom Lane wrote:\n> > \n> > > \"Shridhar Daithankar\" <[email protected]> writes:\n> > > > I say if it's a char field, there should be no indicator of length as\n> > > > it's not required. Just store those many characters straight ahead..\n> > > \n> > > Your assumption fails when considering UNICODE or other multibyte\n> > > character encodings.\n> > \n> > Correct but is it possible to have real char string when database is not \n> > unicode or when locale defines size of char, to be exact?\n> > \n> > In my case varchar does not make sense as all strings are guaranteed to be of \n> > defined length. 
While the argument you have put is correct, it's causing a disk \n> > space leak, to say so.\n\nNot only that, but you get INSERT, UPDATE, DELETE and SELECT performance\ngains with fixed length records, since you don't get fragmentation.\n\nFor example:\nTABLE T\nF1 INTEGER;\nF2 VARCHAR(200)\n\nINSERT INTO T VALUES (1, 'FOO BAR');\nINSERT INTO T VALUES (2, 'SNAFU');\n\nNext,\nUPDATE T SET F2 = 'WIGGLE WAGGLE WUMPERSTUMPER' WHERE F1 = 1;\n\nUnless there is a big gap on disk between the 2 inserted records, \npostgresql must then look somewhere else for space to put the new\nversion of T WHERE F1 = 1.\n\nWith fixed-length records, you know exactly where you can put the\nnew value of F2, thus minimizing IO.\n\n> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n> with some length restrictions. This was one of the reasons. It also\n> simplified a lot of code.\n\nHow much simpler can you get than fixed-length records? \n\nOf course, then there are 2 code paths, 1 for fixed length, and\n1 for variable length.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 08:50:52 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n> Not only that, but you get INSERT, UPDATE, DELETE and SELECT performance\n> gains with fixed length records, since you don't get fragmentation.\n\nThat argument loses a lot of its force when you consider that Postgres\nuses non-overwriting storage management. We never do an UPDATE in-place\nanyway, and so it matters little whether the updated record is the same\nsize as the original.\n\n>> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n>> with some length restrictions. This was one of the reasons. It also\n>> simplified a lot of code.\n\n> How much simpler can you get than fixed-length records? \n\nIt's not simpler: it's more complicated, because you need an additional\ninput item to figure out the size of any given column in a record.\nMaking sure that that info is available every place it's needed is one\nof the costs of supporting a feature like this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 10:38:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On 8 Oct 2002 at 10:38, Tom Lane wrote:\n\n> Ron Johnson <[email protected]> writes:\n> It's not simpler: it's more complicated, because you need an additional\n> input item to figure out the size of any given column in a record.\n> Making sure that that info is available every place it's needed is one\n> of the costs of supporting a feature like this.\n\nI understand. Can we put this in say page header instead of tuple header. While \nall the arguments you have put are really good, the stellar redundancy \ncertainly can do with a mid-way solution.\n\nJust a thought..\n\nBye\n Shridhar\n\n--\nbit, n:\tA unit of measure applied to color. 
Twenty-four-bit color\trefers to \nexpensive $3 color as opposed to the cheaper 25\tcent, or two-bit, color that \nuse to be available a few years ago.\n\n", "msg_date": "Tue, 08 Oct 2002 20:11:47 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On Tue, 2002-10-08 at 09:38, Tom Lane wrote:\n> Ron Johnson <[email protected]> writes:\n> > Not only that, but you get INSERT, UPDATE, DELETE and SELECT performance\n> > gains with fixed length records, since you don't get fragmentation.\n> \n> That argument loses a lot of its force when you consider that Postgres\n> uses non-overwriting storage management. We never do an UPDATE in-place\n> anyway, and so it matters little whether the updated record is the same\n> size as the original.\n\nMust you update any relative indexes, in order to point to the\nnew location of the record?\n\n> >> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n> >> with some length restrictions. This was one of the reasons. It also\n> >> simplified a lot of code.\n> \n> > How much simpler can you get than fixed-length records? \n> \n> It's not simpler: it's more complicated, because you need an additional\n> input item to figure out the size of any given column in a record.\n\nWith fixed-length, why? From the metadata, you can compute the intra-\nrecord offsets. That's how it works with the commercial RDBMS that\nI use at work.\n\nOn that system, even variable-length records don't need record-size\nfields. Any repeating text (more that ~4 chars) is replaced with\nrun-length encoding. This includes the phantom spaces at the end\nof the field.\n\n> Making sure that that info is available every place it's needed is one\n> of the costs of supporting a feature like this.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 10:16:55 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance" }, { "msg_contents": "Ron, Shridhar,\n\nMaybe I missed something on this thread, but can either of you give me\nan example of a real database where the PostgreSQL approach of \"all\nstrings are TEXT\" versus the more traditional CHAR implementation have\nresulted in measurable performance loss?\n\nOtherwise, this discussion is rather academic ...\n\n-Josh Berkus\n", "msg_date": "Tue, 08 Oct 2002 08:33:53 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n> On Tue, 2002-10-08 at 09:38, Tom Lane wrote:\n>> That argument loses a lot of its force when you consider that Postgres\n>> uses non-overwriting storage management. We never do an UPDATE in-place\n>> anyway, and so it matters little whether the updated record is the same\n>> size as the original.\n\n> Must you update any relative indexes, in order to point to the\n> new location of the record?\n\nWe make new index entries for the new record, yes. 
Both the old and new\nrecords must be indexed (until one or the other is garbage-collected by\nVACUUM) so that transactions can find whichever version they are\nsupposed to be able to see according to the tuple visibility rules.\n\n>> It's not simpler: it's more complicated, because you need an additional\n>> input item to figure out the size of any given column in a record.\n\n> With fixed-length, why? From the metadata, you can compute the intra-\n> record offsets.\n\nSure, but you need an additional item of metadata than you otherwise\nwould (this is atttypmod, in Postgres terms). I'm not certain that the\ntypmod is available everyplace that would need to be able to figure out\nthe physical width of a column.\n\n> On that system, even variable-length records don't need record-size\n> fields. Any repeating text (more that ~4 chars) is replaced with\n> run-length encoding. This includes the phantom spaces at the end\n> of the field.\n\nInteresting that you should bring that up in the context of an argument\nfor supporting fixed-width fields ;-). Doesn't any form of data\ncompression bring you right back into variable-width land?\n\nPostgres' approach to data compression is that it's done per-field,\nand only on variable-width fields. We steal a couple of bits from the\nlength word to allow flagging of compressed and out-of-line values.\nIf we were to make CHAR(n) fixed-width then it would lose the ability\nto participate in either compression or out-of-line storage.\n\nBetween that and the multibyte-encoding issue, I think it's very\ndifficult to make a case that the general-purpose CHAR(n) type should\nbe implemented as fixed-width. If someone has a specialized application\nwhere they need a restricted fixed-width string type, it's not that\nhard to make a user-defined type that supports only a single column\nwidth (and thereby gets around the typmod issue). So I'm satisfied with\nsaying \"define your own type if you want this\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 11:51:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On Tue, 2002-10-08 at 10:33, Josh Berkus wrote:\n> Ron, Shridhar,\n> \n> Maybe I missed something on this thread, but can either of you give me\n> an example of a real database where the PostgreSQL approach of \"all\n> strings are TEXT\" versus the more traditional CHAR implementation have\n> resulted in measurable performance loss?\n\n??????\n\n> Otherwise, this discussion is rather academic ...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 12:42:20 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "\nRon,\n\n> > Maybe I missed something on this thread, but can either of you give me\n> > an example of a real database where the PostgreSQL approach of \"all\n> > strings are TEXT\" versus the more traditional CHAR implementation have\n> > resulted in measurable performance loss?\n>\n> ??????\n\nIn other words, if it ain't broke, don't fix it.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Oct 2002 15:44:36 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "\nRon,\n\n> > > > Maybe I missed something on this thread, but can either of you give\n> > > > me an example of a real database where the PostgreSQL approach of\n> > > > \"all strings are TEXT\" versus the more traditional CHAR\n> > > > implementation have resulted in measurable performance loss?\n> > >\n> > > ??????\n> >\n> > In other words, if it ain't broke, don't fix it.\n>\n> Well, does Really Slow Performance qualify as \"broke\"?\n\nThat's what I was asking. Can you explain where your slow performance is \nattibutable to the CHAR implementation issues? I missed that, if it was \nexplained earlier in the thread.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Oct 2002 16:36:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n<[email protected]> wrote:\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n\nShridhar,\n\nhere is an implementation of a set of user types: char3, char4,\nchar10. Put the attached files into a new directory contrib/fixchar,\nmake, make install, and run fixchar.sql through psql. Then create\nyour table as\n\tCREATE TABLE tbl (\n\ttype\t\tint,\n\tesn\t\tchar10,\n\tmin\t\tchar10,\n\tdatetime\ttimestamp,\n\topc0\t\tchar3,\n\t...\n\trest\t\tchar4,\n\tfield0\t\tint,\n\tfield1\t\tchar4,\n\t...\n\t)\n\nThis should save 76 bytes per heap tuple and 12 bytes per index tuple,\ngiving a database size of ~ 76 GB. I'd be very interested how this\naffects performance.\n\nCode has been tested for v7.2, it crashes on v7.3 beta 1. If this is\na problem, let me know.\n\nServus\n Manfred", "msg_date": "Wed, 09 Oct 2002 10:00:03 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 9 Oct 2002 at 10:00, Manfred Koizar wrote:\n\n> On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n> <[email protected]> wrote:\n> >Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n> \n> Shridhar,\n> \n> here is an implementation of a set of user types: char3, char4,\n> char10. Put the attached files into a new directory contrib/fixchar,\n> make, make install, and run fixchar.sql through psql. 
Then create\n> your table as\n> \tCREATE TABLE tbl (\n> \ttype\t\tint,\n> \tesn\t\tchar10,\n> \tmin\t\tchar10,\n> \tdatetime\ttimestamp,\n> \topc0\t\tchar3,\n> \t...\n> \trest\t\tchar4,\n> \tfield0\t\tint,\n> \tfield1\t\tchar4,\n> \t...\n> \t)\n> \n> This should save 76 bytes per heap tuple and 12 bytes per index tuple,\n> giving a database size of ~ 76 GB. I'd be very interested how this\n> affects performance.\n> \n> Code has been tested for v7.2, it crashes on v7.3 beta 1. If this is\n> a problem, let me know.\n\nThank you very much for this. I would certainly give it a try. Please be \npatient as next test is scheuled on monday.\n\nBye\n Shridhar\n\n--\nlove, n.:\tWhen it's growing, you don't mind watering it with a few tears.\n\n", "msg_date": "Wed, 09 Oct 2002 13:37:13 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 9 Oct 2002 at 10:00, Manfred Koizar wrote:\n\n> On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n> <[email protected]> wrote:\n> >Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n> \n> Shridhar,\n> \n> here is an implementation of a set of user types: char3, char4,\n> char10. Put the attached files into a new directory contrib/fixchar,\n> make, make install, and run fixchar.sql through psql. Then create\n> your table as\n\nI had a quick look in things. I think it's a great learning material for pg \ninternals..;-)\n\nI have a suggestion. In README, it should be worth mentioning that, new types \ncan be added just by changin Makefile. e.g. Changing line\n\nOBJS = char3.o char4.o char10.o\n\nto\n\nOBJS = char3.o char4.o char5.o char10.o \n\nwould add the datatype char5 as well. \n\nObviously this is for those who might not take efforts to read the source. ( \nPersonally I wouldn't have, had it been part of entire postgres source dump. \nJust would have done ./configure;make;make install)\n\nThanks for the solution. It wouldn't have occurred to me in ages to create a \ntype for this. I guess that's partly because never used postgresql beyond \nselect/insert/update/delete. Anyway should have been awake..\n\nThanks once again\n\n\nBye\n Shridhar\n\n--\nBut it's real. And if it's real it can be affected ... we may not be ableto \nbreak it, but, I'll bet you credits to Navy Beans we can put a dent in it.\t\t-- \ndeSalle, \"Catspaw\", stardate 3018.2\n\n", "msg_date": "Wed, 09 Oct 2002 13:55:28 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> here is an implementation of a set of user types: char3, char4,\n> char10.\n\nCoupla quick comments on these:\n\n> CREATE FUNCTION charNN_lt(charNN, charNN)\n> RETURNS boolean\n> AS '$libdir/fixchar'\n> LANGUAGE 'c';\n\n> bool\n> charNN_lt(char *a, char *b)\n> {\n> \treturn (strncmp(a, b, NN) < 0);\n> }/*charNN_lt*/\n\nThese functions are dangerous as written, because they will crash on\nnull inputs. I'd suggest marking them strict in the function\ndeclarations. 
Some attention to volatility declarations (isCachable\nor isImmutable) would be a good idea too.\n\nAlso, it'd be faster and more portable to write the functions with\nversion-1 calling conventions.\n\nUsing the Makefile to auto-create the differently sized versions is\na slick trick...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 09:32:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On 9 Oct 2002 at 9:32, Tom Lane wrote:\n\n> Manfred Koizar <[email protected]> writes:\n> > here is an implementation of a set of user types: char3, char4,\n> > char10.\n> \n> Coupla quick comments on these:\n> \n> > CREATE FUNCTION charNN_lt(charNN, charNN)\n> > RETURNS boolean\n> > AS '$libdir/fixchar'\n> > LANGUAGE 'c';\n> \n> > bool\n> > charNN_lt(char *a, char *b)\n> > {\n> > \treturn (strncmp(a, b, NN) < 0);\n> > }/*charNN_lt*/\n> \n> These functions are dangerous as written, because they will crash on\n> null inputs. I'd suggest marking them strict in the function\n> declarations. Some attention to volatility declarations (isCachable\n> or isImmutable) would be a good idea too.\n\nLet me add something. Using char* is bad idea. I had faced a situation recently \non HP-UX 11 that with a libc patch, isspace collapsed for char>127. Fix was to \nuse unsigned char. There are other places also where the input character is \nused as index to an array internally and can cause weird behaviour for values \n>127\n\nI will apply both the correction here. Will post the final stuff soon.\n\nBye\n Shridhar\n\n--\nHacker's Quicky #313:\tSour Cream -n- Onion Potato Chips\tMicrowave Egg Roll\t\nChocolate Milk\n\n", "msg_date": "Wed, 09 Oct 2002 19:11:09 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "I have a problem with the index of 1 table.\n\nI hava a table created :\n\tCREATE TABLE \"acucliart\" (\n \"cod_pto\" numeric(8,0) NOT NULL,\n \"cod_cli\" varchar(9) NOT NULL,\n \"mes\" numeric(2,0) NOT NULL,\n \"ano\" numeric(4,0) NOT NULL,\n \"int_art\" numeric(5,0) NOT NULL,\n \"cantidad\" numeric(12,2),\n \"ven_siv_to\" numeric(14,2),\n \"ven_civ_to\" numeric(14,2),\n \"tic_siv_to\" numeric(14,2),\n \"tic_civ_to\" numeric(14,2),\n \"visitas\" numeric(2,0),\n \"ult_vis\" date,\n \"ven_cos\" numeric(12,2),\n \"ven_ofe\" numeric(12,2),\n \"cos_ofe\" numeric(12,2),\n CONSTRAINT \"acucliart_pkey\"\n PRIMARY KEY (\"cod_cli\")\n);\n\nif i do this select:\n\texplain select * from acucliart where cod_cli=10000;\n\t\tpostgres use the index\n\t\tNOTICE: QUERY PLAN:\n\t\tIndex Scan using cod_cli_ukey on acucliart (cost=0.00..4.82 rows=1\nwidth=478)\n\nand this select\n\t\texplain select * from acucliart where cod_cli>10000;\n\t\tPostgres don't use the index:\n\t\tNOTICE: QUERY PLAN:\n\t\tSeq Scan on acucliart (cost=0.00..22.50 rows=333 width=478)\n\nwhy?\n\n\ntk\n\n", "msg_date": "Wed, 9 Oct 2002 18:56:41 +0200", "msg_from": "\"Jose Antonio Leo\" <[email protected]>", "msg_from_op": false, "msg_subject": "problem with the Index" }, { "msg_contents": "On Wed, 9 Oct 2002, Jose Antonio Leo wrote:\n\n> I have a problem with the index of 1 table.\n>\n> I hava a table created :\n> \tCREATE TABLE \"acucliart\" (\n> \"cod_pto\" numeric(8,0) NOT NULL,\n> \"cod_cli\" varchar(9) NOT NULL,\n> \"mes\" numeric(2,0) NOT NULL,\n> \"ano\" numeric(4,0) NOT NULL,\n> \"int_art\" 
numeric(5,0) NOT NULL,\n> \"cantidad\" numeric(12,2),\n> \"ven_siv_to\" numeric(14,2),\n> \"ven_civ_to\" numeric(14,2),\n> \"tic_siv_to\" numeric(14,2),\n> \"tic_civ_to\" numeric(14,2),\n> \"visitas\" numeric(2,0),\n> \"ult_vis\" date,\n> \"ven_cos\" numeric(12,2),\n> \"ven_ofe\" numeric(12,2),\n> \"cos_ofe\" numeric(12,2),\n> CONSTRAINT \"acucliart_pkey\"\n> PRIMARY KEY (\"cod_cli\")\n> );\n>\n> if i do this select:\n> \texplain select * from acucliart where cod_cli=10000;\n> \t\tpostgres use the index\n> \t\tNOTICE: QUERY PLAN:\n> \t\tIndex Scan using cod_cli_ukey on acucliart (cost=0.00..4.82 rows=1\n> width=478)\n>\n> and this select\n> \t\texplain select * from acucliart where cod_cli>10000;\n> \t\tPostgres don't use the index:\n> \t\tNOTICE: QUERY PLAN:\n> \t\tSeq Scan on acucliart (cost=0.00..22.50 rows=333 width=478)\n>\n> why?\n\nWell, how many rows are in the table? In the first case it estimates 1\nrow will be returned, in the second 333. Index scans are not always faster\nthan sequential scans as the percentage of the table to scan becomes\nlarger. If you haven't analyzed recently, you probably should do so and\nif you want to compare, set enable_seqscan=off and try an explain there\nand see what it gives you.\n\nAlso, why are you comparing a varchar(9) column with an integer?\n\n", "msg_date": "Wed, 9 Oct 2002 10:31:12 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] problem with the Index" }, { "msg_contents": "On Wed, 09 Oct 2002 09:32:50 -0400, Tom Lane <[email protected]>\nwrote:\n>Coupla quick comments on these:\n\nMy first attempt on user types; thanks for the tips.\n\n>These functions are dangerous as written, because they will crash on\n>null inputs. I'd suggest marking them strict in the function\n>declarations.\n\nI was not aware of this, just wondered why bpchar routines didn't\ncrash :-) Fixed.\n\n>Some attention to volatility declarations (isCachable\n>or isImmutable) would be a good idea too.\n>Also, it'd be faster and more portable to write the functions with\n>version-1 calling conventions.\n\nDone, too. In the meantime I've found out why it crashed with 7.3:\nINSERT INTO pg_opclass is now obsolete, have to use CREATE OPERATOR\nCLASS ...\n\nServus\n Manfred\n", "msg_date": "Wed, 09 Oct 2002 20:09:03 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Wed, 09 Oct 2002 10:00:03 +0200, I wrote:\n>here is an implementation of a set of user types: char3, char4,\n>char10.\n\nNew version available. As I don't want to spam the list with various\nversions until I get it right eventually, you can get it from\nhttp://members.aon.at/pivot/pg/fixchar20021010.tgz if you are\ninterested.\n\nWhat's new:\n\n. README updated (per Shridhar's suggestion)\n. doesn't crash on NULL (p. Tom)\n. version-1 calling conventions (p. Tom)\n. isCachable (p. Tom)\n. works for 7.2 (as delivered) and for 7.3 (make for73)\n\nShridhar, you were concerned about signed/unsigned chars; looking at\nthe code I can not see how this is a problem. So no change in this\nregard.\n\nThanks for your comments. 
Have fun!\n\nServus\n Manfred\n", "msg_date": "Thu, 10 Oct 2002 15:30:31 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "contrib/fixchar (Was: Large databases, performance)" }, { "msg_contents": "On 10 Oct 2002 at 15:30, Manfred Koizar wrote:\n\n> On Wed, 09 Oct 2002 10:00:03 +0200, I wrote:\n> >here is an implementation of a set of user types: char3, char4,\n> >char10.\n> \n> New version available. As I don't want to spam the list with various\n> versions until I get it right eventually, you can get it from\n> http://members.aon.at/pivot/pg/fixchar20021010.tgz if you are\n> interested.\n> \n> What's new:\n> \n> . README updated (per Shridhar's suggestion)\n> . doesn't crash on NULL (p. Tom)\n> . version-1 calling conventions (p. Tom)\n> . isCachable (p. Tom)\n> . works for 7.2 (as delivered) and for 7.3 (make for73)\n> \n> Shridhar, you were concerned about signed/unsigned chars; looking at\n> the code I can not see how this is a problem. So no change in this\n> regard.\n\nWell, this is not related to postgresql exactly but to summerise the problem, \nwith libc patch PHCO_19090 or compatible upwards, on HP-UX11, isspace does not \nwork correctly if input value is >127. Can cause lot of problem for an external \napp. It works fine with unsigned char\n\nDoes not make a difference from postgrersql point of view but would break non-\nenglish locale if they want to use this fix under some situation.\n\nBut I agree, unless somebody reports it, no point fixing it and we know the fix \nanyway..\n\n\nBye\n Shridhar\n\n--\nLive long and prosper.\t\t-- Spock, \"Amok Time\", stardate 3372.7\n\n", "msg_date": "Thu, 10 Oct 2002 19:19:11 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: contrib/fixchar (Was: Large databases, performance)" } ]
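To make the preceding fixchar discussion concrete, here is a minimal sketch of the SQL wiring for one such fixed-width type. Names follow Manfred's contrib/fixchar package, but the statements are written in present-day CREATE TYPE / CREATE FUNCTION syntax rather than the 7.2/7.3 catalog calls the package actually uses, and the C module behind '$libdir/fixchar' is assumed rather than shown.

-- Sketch only: a 10-byte fixed-width type in the spirit of contrib/fixchar.
-- The C functions loaded from '$libdir/fixchar' are assumed to exist.

CREATE TYPE char10;                          -- shell type, completed below

CREATE FUNCTION char10_in(cstring) RETURNS char10
    AS '$libdir/fixchar' LANGUAGE C STRICT IMMUTABLE;
CREATE FUNCTION char10_out(char10) RETURNS cstring
    AS '$libdir/fixchar' LANGUAGE C STRICT IMMUTABLE;

-- internallength = 10 is the whole point: no 4-byte length word and no
-- padding out to a 4-byte boundary, so each value occupies 10 bytes.
CREATE TYPE char10 (
    internallength = 10,
    input          = char10_in,
    output         = char10_out,
    alignment      = char
);

-- Comparison support; STRICT keeps NULL inputs away from the C code (the
-- crash Tom pointed out) and IMMUTABLE matches the isCachable marking.
CREATE FUNCTION char10_lt(char10, char10) RETURNS boolean
    AS '$libdir/fixchar' LANGUAGE C STRICT IMMUTABLE;
CREATE OPERATOR < (
    leftarg    = char10,
    rightarg   = char10,
    procedure  = char10_lt,
    commutator = >
);

-- Usage; WITHOUT OIDS saves a further 8 bytes per row on 7.3.  char3 and
-- char4 from the thread would be declared the same way, with
-- internallength = 3 and 4.
CREATE TABLE calls (
    esn      char10,
    min      char10,
    datetime timestamp
) WITHOUT OIDS;

Indexing the type additionally needs an operator class (CREATE OPERATOR CLASS on 7.3 and later, as noted above); that part is omitted here, and the table name is made up for the example.
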
[ { "msg_contents": "\nsubscribe pgsql-performance\n\n-- \nBest Regards,\n \nMike Benoit\nNetNation Communication Inc.\nSystems Engineer\nTel: 604-684-6892 or 888-983-6600\n ---------------------------------------\n \n Disclaimer: Opinions expressed here are my own and not \n necessarily those of my employer\n\n", "msg_date": "03 Oct 2002 09:29:21 -0700", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": true, "msg_subject": "subscribe pgsql-performance" } ]
[ { "msg_contents": "Folks,\n\nSorry for the double-quoting here. I sent this to just Ron by\naccident. My original question is double-quoted, Ron is quoted, and my\nresponses are below. Thanks!\n\n> > Ok, I'm still confused.\n> > \n> > I'm updating a (not not indexed) field in a 117,000 row table based\n> on \n> > information in another 117,000 row table. The update is an\n> integer, and the \n> > linking fields are indexed. Yet the two queries are flattening my\n> \n> > dual-processor, RAID5 database server for up to 11 minutes ...\n> using 230mb \n> > ram the entire time. I simply can't believe that these two\n> queries are that \n> > difficult.\n> \n> So there's no index on elbs_matter_links.case_id? From your original\n> \n> post, I thought that there *is* an index on that field.\n\nI'm now dropping it before the update. Unfortunately, dropping the\nindex made no appreciable gain in performance.\n\n> > I've increased the memory available to the update to 256mb, and\n> tried forcing \n> > an index scan ... to no avail. Ideas, please?\n> > \n> > The queries:\n> > \n> > UPDATE elbs_matter_links SET case_id = case_clients.case_id\n> > FROM case_clients\n> > WHERE elbs_matter_links.mmatter = case_clients.matter_no;\n> \n> What happens if you run the query:\n> SELECT eml.case_id, cc.case_id, eml.mmatter, cc.matter_no\n> FROM elbs_matter_links eml,\n> case_clients cc\n> WHERE eml.mmatter = cc.matter_no;\n> \n> That, for all intents and purposes, is your UPDATE statement, just\n> without doing the UPDATE. How fast does it run?\n\nSlowly. It takes about 60 seconds to return data. This may be the\nproblem. Thoughts? Here's EXPLAIN output:\n\nHash Join (cost=3076.10..91842.88 rows=108648 width=40)\n -> Seq Scan on elbs_matter_links eml (cost=0.00..85641.87\nrows=117787 width=20)\n -> Hash (cost=2804.48..2804.48 rows=108648 width=20)\n -> Seq Scan on case_clients cc (cost=0.00..2804.48\nrows=108648 width=20)\n\nAccording to the parser, using the indexes would be worse:\n\nMerge Join (cost=0.00..520624.38 rows=108648 width=40)\n -> Index Scan using idx_eml_mmatter on elbs_matter_links eml\n (cost=0.00..451735.00 rows=117787 width=20)\n -> Index Scan using idx_caseclients_matter on case_clients cc\n (cost=0.00..66965.20 rows=108648 width=20)\n\nThough in practice, a forced index scan returns rows in about 60\nseconds, same as the SeqScan version.\n\nAll of this seems very costly for a query that, while it does return a\nlot of rows, is essentially a very simple query. \n\nMore importantly, on the hardware I'm using, I would expect better\nperformance that I get on my laptop ... and I'm not seeing it. I just\ncan't believe that the simple query above could soak 200mb of RAM for a\nfull 60 seconds to return a result. 
It's like queries over a certain\nresult size on the system choke postgres.\n\n\nMy reference data below:\n==============================================\n\n> \n> > UPDATE elbs_matter_links SET case_id = cases.case_id\n> > FROM cases\n> > WHERE elbs_matter_links.docket = cases.docket\n> > AND elbs_matter_links.case_id IS NULL;\n> > \n> > \n> > EXPLAIN output:\n> > \n> > Hash Join (cost=4204.83..39106.77 rows=8473 width=299)\n> > -> Index Scan using idx_eml_mmatter on elbs_matter_links \n> > (cost=0.00..34668.94 rows=8473 width=279)\n> > -> Hash (cost=2808.38..2808.38 rows=109038 width=20)\n> > -> Seq Scan on case_clients (cost=0.00..2808.38\n> rows=109038 \n> > width=20)\n> > \n> > Nested Loop (cost=0.00..32338.47 rows=99 width=300)\n> > -> Seq Scan on cases (cost=0.00..9461.97 rows=4297 width=21)\n> > -> Index Scan using idx_eml_docket on elbs_matter_links\n> (cost=0.00..5.31 \n> > rows=1 width=279)\n> > \n> > Table defintions:\n> > \n> > Table \"elbs_matter_links\"\n> > Column | Type | Modifiers\n> > ------------------+-----------------------+-----------------------\n> > mmatter | character varying(15) | not null\n> > case_id | integer |\n> > matter_check | character varying(20) | not null default 'OK'\n> > docket | character varying(50) |\n> > case_name | character varying(50) |\n> > practice | character varying(50) |\n> > opp_counsel_name | character varying(50) |\n> > opp_counsel_id | integer |\n> > act_type | character varying(10) |\n> > lead_case_id | integer |\n> > lead_case_docket | character varying(50) |\n> > disease | character varying(50) |\n> > docket_no | character varying(25) |\n> > juris_state | character varying(6) |\n> > juris_local | character varying(20) |\n> > status | smallint | not null default 1\n> > client_id | integer |\n> > office_loc | character varying(5) |\n> > date_filed | date |\n> > date_served | date |\n> > date_resolved | date |\n> > case_status | character varying(5) |\n> > settle_amount | numeric(12,2) | default 0\n> > narrative | text |\n> > comment | character varying(50) |\n> > client_no | character varying(10) |\n> > juris_id | integer |\n> > Indexes: idx_eml_check,\n> > idx_eml_docket,\n> > idx_eml_mmatter\n> > Primary key: elbs_matter_links_pkey\n> > \n> > Table \"case_clients\"\n> > Column | Type |\n> Modifiers\n> >\n>\n------------------+-----------------------+----------------------------------------------------\n> > case_client_id | integer | not null default \n> > nextval('case_clients_seq'::text)\n> > case_id | integer | not null\n> > client_id | integer | not null\n> > office_loc | character varying(5) |\n> > date_filed | date |\n> > date_served | date |\n> > date_resolved | date |\n> > matter_no | character varying(15) | not null\n> > case_status | character varying(5) | not null\n> > settle_amount | numeric(14,2) | not null default 0\n> > matter_narrative | text |\n> > comment | character varying(50) |\n> > Indexes: idx_case_clients_client,\n> > idx_caseclients_case,\n> > idx_caseclients_matter,\n> > idx_caseclients_resolved,\n> > idx_caseclients_served,\n> > idx_caseclients_status\n> > Primary key: case_clients_pkey\n> > \n> > \n> > Table \"cases\"\n> > Column | Type |\n> Modifiers\n> >\n>\n------------------+-----------------------+---------------------------------------------\n> > case_id | integer | not null default \n> > nextval('cases_seq'::text)\n> > docket | character varying(50) | not null\n> > case_name | character varying(50) | not null\n> > practice | character varying(50) | not null\n> > opp_counsel_name | character 
varying(50) |\n> > opp_counsel_id | integer |\n> > act_type | character varying(10) |\n> > lead_case_id | integer |\n> > lead_case_docket | character varying(50) |\n> > disease | character varying(50) |\n> > docket_no | character varying(25) | not null\n> > juris_state | character varying(6) | not null\n> > juris_local | character varying(20) |\n> > tgroup_id | integer |\n> > status | smallint | not null default 1\n> > juris_id | integer |\n> > Indexes: idx_case_cases_juris,\n> > idx_cases_docket,\n> > idx_cases_lead,\n> > idx_cases_name,\n> > idx_cases_status,\n> > idx_cases_tgroup,\n> > idx_lower_case_name\n> > \n> > \n> > \n> > -- \n> > Josh Berkus\n> > [email protected]\n> > Aglio Database Solutions\n> > San Francisco\n> -- \n> +------------------------------------------------------------+\n> | Ron Johnson, Jr. mailto:[email protected] |\n> | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> | |\n> | \"What other evidence do you have that they are terrorists, |\n> | other than that they trained in these camps?\" |\n> | 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n> | men arrested near Buffalo NY |\n> +------------------------------------------------------------+\n> \n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology [email protected]\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n", "msg_date": "Fri, 04 Oct 2002 08:54:56 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "On Fri, Oct 04, 2002 at 08:54:56AM -0700, Josh Berkus wrote:\n\n> Slowly. It takes about 60 seconds to return data. This may be the\n> problem. Thoughts? Here's EXPLAIN output:\n\n[. . .]\n\n> According to the parser, using the indexes would be worse:\n\nHave you run this with EXPLAIN ANALYSE? It will actually perform the\nnecessary steps, so it will reveal if the planner is getting\nsomething wrong.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 4 Oct 2002 12:44:03 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "\nAndrew,\n\n> Have you run this with EXPLAIN ANALYSE? It will actually perform the\n> necessary steps, so it will reveal if the planner is getting\n> something wrong.\n\nHere it is:\n\nHash Join (cost=3076.10..91842.88 rows=108648 width=40) (actual \ntime=18625.19..22823.39 rows=108546 loops=1)\n -> Seq Scan on elbs_matter_links eml (cost=0.00..85641.87 rows=117787 \nwidth=20) (actual time=18007.69..19515.63 rows=117787 loops=1)\n -> Hash (cost=2804.48..2804.48 rows=108648 width=20) (actual \ntime=602.12..602.12 rows=0 loops=1)\n -> Seq Scan on case_clients cc (cost=0.00..2804.48 rows=108648 \nwidth=20) (actual time=5.18..370.68 rows=108648 loops=1)\nTotal runtime: 22879.26 msec\n\nThe above doesn't seem bad, except that this is some serious hardware in this \nsystem and 23 seconds right after VACUUM ANALYZE is too long. 
I've a feeling \nthat I botched one of my postgresql.conf parameters or something.\n\nI'll do an explain for the UPDATE query later, when the users are off the \nsystem.\n\n-Josh Berkus\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 4 Oct 2002 11:13:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "On Fri, Oct 04, 2002 at 11:13:09AM -0700, Josh Berkus wrote:\n> \n> Andrew,\n> \n> > Have you run this with EXPLAIN ANALYSE? It will actually perform the\n> > necessary steps, so it will reveal if the planner is getting\n> > something wrong.\n> \n> Here it is:\n\nOops, sorry. What if you force the index use here? Just because the\nplanner thinks that's more expensive doesn't mean that it is.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 4 Oct 2002 14:25:23 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "\nAndrew,\n\n> Oops, sorry. What if you force the index use here? Just because the\n> planner thinks that's more expensive doesn't mean that it is.\n\nYeah, I tried it ... no faster, no slower, really.\n\nBTW, in case you missed it, the real concern is that an UPDATE query similar \nto the SELECT query we are discussing takes over 10 minutes, which on this \nhardware is ridiculous. Robert suggested that we test the SELECT query to \nsee if there were general performance problems; apparently, there are.\n\nThe hardware I'm using is:\ndual-processor Athalon 1400mhz motherboard\nraid 5 UW SCSI drive array with 3 drives\n512mb DDR RAM\nSuSE Linux 7.3 (Kernel 2.4.10)\nPostgres is on its own LVM partition\nPostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n\t(will upgrade to 7.2.3 very soon)\nPostgresql.conf has: fdatasync, various chared memory tuned to allocate 256mb \nto postgres (which seems to be working correctly).\nDebug level 2.\n\nWhen the UPDATE query takes a long time, I generally can watch the log hover \nin the land of \"Reaping dead child processes\" for 30-90 seconds per \niteration.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 4 Oct 2002 12:09:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "On Fri, Oct 04, 2002 at 12:09:42PM -0700, Josh Berkus wrote:\n\n> BTW, in case you missed it, the real concern is that an UPDATE query similar \n> to the SELECT query we are discussing takes over 10 minutes, which on this \n> hardware is ridiculous. Robert suggested that we test the SELECT query to \n> see if there were general performance problems; apparently, there are.\n\nYes, that's my thought, too.\n\n> Postgresql.conf has: fdatasync, various chared memory tuned to allocate 256mb \n> to postgres (which seems to be working correctly).\n\nHmm. Are you swapping? Lots of temp files? (I presume you've been\nover all that.) Half your physical memory seems pretty dangerous to\nme. If oyu reduce that, does it help?\n\n> When the UPDATE query takes a long time, I generally can watch the log hover \n> in the land of \"Reaping dead child processes\" for 30-90 seconds per \n> iteration.\n\nIck. Hmm. 
What sort of numbers do you get from vmstat, iostat, sar,\nand friends?\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 4 Oct 2002 15:24:15 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Hash Join (cost=3076.10..91842.88 rows=108648 width=40) (actual \n> time=18625.19..22823.39 rows=108546 loops=1)\n> -> Seq Scan on elbs_matter_links eml (cost=0.00..85641.87 rows=117787 \n> width=20) (actual time=18007.69..19515.63 rows=117787 loops=1)\n> -> Hash (cost=2804.48..2804.48 rows=108648 width=20) (actual \n> time=602.12..602.12 rows=0 loops=1)\n> -> Seq Scan on case_clients cc (cost=0.00..2804.48 rows=108648 \n> width=20) (actual time=5.18..370.68 rows=108648 loops=1)\n> Total runtime: 22879.26 msec\n\nHm. Why does it take 19500 milliseconds to read 117787 rows from\nelbs_matter_links, if 108648 rows can be read from case_clients in 370\nmsec? And why does the output show that the very first of those rows\nwas returned only after 18000 msec?\n\nI am suspicious that this table has a huge number of empty pages in it,\nmostly at the beginning. If so, a VACUUM FULL would help. (Try\n\"vacuum full verbose elbs_matter_links\" and see if it indicates it's\nreclaiming any large number of pages.)\n\nIf that proves to be the answer, you need to look to your FSM\nparameters, and perhaps arrange for more frequent regular vacuums\nof this table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 15:41:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> When the UPDATE query takes a long time, I generally can watch the log hover \n> in the land of \"Reaping dead child processes\" for 30-90 seconds per \n> iteration.\n\nUh ... would you translate that observation into English please? Or\nbetter, provide the log output you're looking at?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 15:45:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed " }, { "msg_contents": "\nTom,\n\n> I am suspicious that this table has a huge number of empty pages in it,\n> mostly at the beginning. If so, a VACUUM FULL would help. (Try\n> \"vacuum full verbose elbs_matter_links\" and see if it indicates it's\n> reclaiming any large number of pages.)\n\nThank you. Aha.\n\nThat appears to have been the main problem; apparently, at some time during my \ntinkering, I dumped most of the rows from elbs_matter_links a couple of \ntimes. Ooops.\n\nI'll post the new situation when I test the update queries tonight.\n\n-- \nJosh Berkus\[email protected]\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 4 Oct 2002 14:01:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparitive UPDATE speed" } ]
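Pulling the advice in this thread together, here is a sketch of the diagnose-and-repair sequence using the table names from the thread. The configuration values at the end are placeholders to be sized per installation, not recommendations.

-- 1. Compare on-disk size with live row count.  The pg_class figures are
--    only as fresh as the last VACUUM/ANALYZE.
SELECT relname, relpages, reltuples
FROM   pg_class
WHERE  relname IN ('elbs_matter_links', 'case_clients');

-- 2. Reclaim the dead space left behind by the earlier mass deletes;
--    VERBOSE reports how many pages could be removed.
VACUUM FULL VERBOSE elbs_matter_links;
ANALYZE elbs_matter_links;

-- 3. Re-time the join once the bloat is gone.
EXPLAIN ANALYZE
SELECT eml.case_id, cc.case_id, eml.mmatter, cc.matter_no
FROM   elbs_matter_links eml, case_clients cc
WHERE  eml.mmatter = cc.matter_no;

-- 4. Keep it from coming back: vacuum the table routinely and make the
--    free space map large enough to remember its pages (postgresql.conf
--    on this era of server; placeholder values):
--        max_fsm_relations = 1000
--        max_fsm_pages     = 200000

Running step 1 before and after the VACUUM FULL shows how many pages were actually reclaimed.
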
[ { "msg_contents": "\n> if i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\n> integer specifying the length followed by as many characters as the\n> length tells. On 32-bit Intel hardware this structure is aligned on a\n> 4-byte boundary.\n\nYes.\n\n> | opc0 char (3) no no 8 4\n> | opc1 char (3) no no 8 4\n> | opc2 char (3) no no 8 4\n\n> Hackers, do you think it's possible to hack together a quick and dirty\n> patch, so that string length is represented by one byte? IOW can a\n> database be built that doesn't contain any char/varchar/text value\n> longer than 255 characters in the catalog?\n\nSince he is only using fixchar how about doing a fixchar implemetation, that \ndoes not store length at all ? It is the same for every row anyways !\n\nAndreas\n", "msg_date": "Mon, 7 Oct 2002 17:42:12 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Mon, Oct 07, 2002 at 05:42:12PM +0200, Zeugswetter Andreas SB SD wrote:\n> > Hackers, do you think it's possible to hack together a quick and dirty\n> > patch, so that string length is represented by one byte? IOW can a\n> > database be built that doesn't contain any char/varchar/text value\n> > longer than 255 characters in the catalog?\n> \n> Since he is only using fixchar how about doing a fixchar implemetation, that \n> does not store length at all ? It is the same for every row anyways !\n\nRemember that in Unicode, 1 char != 1 byte. In fact, any encoding that's not\nLatin will have a problem. I guess you could put a warning on it: not for\nuse for asian character sets. So what do you do if someone tries to insert\nsuch a string anyway?\n\nPerhaps a better approach is to vary the number of bytes used for the\nlength. So one byte for lengths < 64, two bytes for lengths < 16384.\nUnfortunatly, two bits in the length are already used (IIRC) for other\nthings making it a bit more tricky.\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Wed, 9 Oct 2002 08:51:11 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" } ]
[ { "msg_contents": "Hello all,\n\nI am experiencing slow db performance. I have vacuumed, analyzed, reindexed\nusing the force option and performance remains the same - dog-slow :( If I\ndrop and recreate the database, performance is normal, so this suggests a\nproblem with the indexes? I also took a look at the postgresql.conf and all\nappears fine. There are many instances of the same database running on\ndifferent servers and not all servers are experiencing the problem.\n\nThanks in advance for any suggestions.\n\nMarie\n\n\n", "msg_date": "Mon, 7 Oct 2002 14:22:09 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "sloooow query" }, { "msg_contents": "\nMarie,\n\n> I am experiencing slow db performance. I have vacuumed, analyzed, reindexed\n> using the force option and performance remains the same - dog-slow :( If I\n> drop and recreate the database, performance is normal, so this suggests a\n> problem with the indexes? I also took a look at the postgresql.conf and all\n> appears fine. There are many instances of the same database running on\n> different servers and not all servers are experiencing the problem.\n\nPlease post the following:\n1) A copy of the relevant portions of your database schema.\n2) The query that is running slowly.\n3) The results of running EXPLAIN on that query.\n4) Your PostgreSQL version and operating system\n5) Any other relevant information about your databases, such as the quantity \nof inserts and deletes on the relevant tables.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 7 Oct 2002 12:29:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "Josh Berkus wrote:\n> \n> Marie,\n> \n> > I am experiencing slow db performance. I have vacuumed, analyzed, reindexed\n> > using the force option and performance remains the same - dog-slow :( If I\n> > drop and recreate the database, performance is normal, so this suggests a\n> > problem with the indexes? I also took a look at the postgresql.conf and all\n> > appears fine. There are many instances of the same database running on\n> > different servers and not all servers are experiencing the problem.\n> \n> Please post the following:\n> 1) A copy of the relevant portions of your database schema.\n> 2) The query that is running slowly.\n> 3) The results of running EXPLAIN on that query.\n> 4) Your PostgreSQL version and operating system\n> 5) Any other relevant information about your databases, such as the quantity\n> of inserts and deletes on the relevant tables.\n\n6) And the sort_mem, shared_buffers, vacuum_mem, wal_buffers, and\nwal_files settings from your postgresql.conf file, if possible.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 08 Oct 2002 05:30:03 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "Josh,\n\nThanks for the reply.\n\nI pg_dumped the first database having performance problems and reloaded it\ninto a new database on the same server. The query ran normally when I\nreloaded it. There is no difference in hardware, schema or anything else.\n\nproject=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\n[mtuite@area52 mtuite]$ uname -a\nLinux area52.spacedock.com 2.4.7-10 #1 Thu Sep 6 17:27:27 EDT 2001 i686\nunknown\n\n\nBelow is the explain for the reload.\n\nbm221=# \\i bad.qry\npsql:bad.qry:78: NOTICE: QUERY PLAN:\n\nSort (cost=273.71..273.71 rows=1 width=237) (actual time=143.82..143.96\nrows=181 loops=1)\n -> Group (cost=273.53..273.70 rows=1 width=237) (actual\ntime=136.98..140.78 rows=181 loops=1)\n -> Sort (cost=273.53..273.53 rows=7 width=237) (actual\ntime=136.95..137.11 rows=181 loops=1)\n -> Merge Join (cost=273.37..273.43 rows=7 width=237) (actual\ntime=124.41..129.72 rows=181 loops=1)\n -> Sort (cost=162.24..162.24 rows=7 width=216) (actual\ntime=51.83..52.00 rows=181 loops=1)\n -> Subquery Scan student_set\n(cost=161.09..162.14 rows=7 width=216) (actual time=48.12..50.49 rows=181\nloops=1)\n -> Unique (cost=161.09..162.14 rows=7\nwidth=216) (actual time=48.10..49.45 rows=181 loops=1)\n -> Sort (cost=161.09..161.09 rows=70\nwidth=216) (actual time=48.09..48.26 rows=181 loops=1)\n -> Hash Join\n(cost=130.58..158.96 rows=70 width=216) (actual time=43.26..47.11 rows=181\nloops=1)\n -> Seq Scan on classes c\n(cost=0.00..20.00 rows=1000 width=72) (actual time=0.12..1.78 rows=332\nloops=1)\n -> Hash\n(cost=130.55..130.55 rows=14 width=144) (actual time=43.02..43.02 rows=0\nloops=1)\n -> Hash Join\n(cost=105.38..130.55 rows=14 width=144) (actual time=31.13..42.44 rows=181\nloops=1)\n -> Seq Scan\non user_common uc (cost=0.00..20.00 rows=1000 width=80) (actual\ntime=0.12..7.07 rows=1045 loops=1)\n -> Hash\n(cost=105.37..105.37 rows=3 width=64) (actual time=30.91..30.91 rows=0\nloops=1)\n -> Hash\nJoin (cost=77.46..105.37 rows=3 width=64) (actual time=4.79..30.46 rows=181\nloops=1)\n ->\nSeq Scan on student_class_rlt scr (cost=0.00..22.50 rows=995 width=24)\n(actual time=0.25..23.74 rows=527 loops=1)\n ->\nHash (cost=77.45..77.45 rows=5 width=40) (actual time=4.02..4.02 rows=0\nloops=1)\n \n -> Hash Join (cost=52.38..77.45 rows=5 width=40) (actual\ntime=3.28..3.96 rows=27 loops=1)\n \n -> Seq Scan on bm_subscriptions_rlt bsr (cost=0.00..20.00\nrows=1000 width=8) (actual time=0.11..0.47 rows=114 loops=1)\n \n -> Hash (cost=52.38..52.38 rows=1 width=32) (actual\ntime=3.10..3.10 rows=0 loops=1)\n \n -> Hash Join (cost=4.83..52.38 rows=1 width=32) (actual\ntime=2.23..3.07 rows=11 loops=1)\n \n -> Seq Scan on bm_publications bp (cost=0.00..47.50\nrows=11 width=12) (actual time=1.49..2.25 rows=11 loops=1)\n\n -> Hash (cost=4.82..4.82 rows=1 width=20) (actual time=0.63..0.63\nrows=0 loops=1)\n \n -> Index Scan using bm_publication_events_pkey\non bm_publication_events bpe (cost=0.00..4.82 rows=1 width=20) (actual\ntime=0.60..0.61 rows=1 loops=1)\n -> Sort (cost=111.13..111.13 rows=18 width=21) (actual\ntime=72.51..73.15 rows=770 loops=1)\n -> Subquery Scan participation_set\n(cost=22.51..110.75 
rows=18 width=21) (actual time=1.32..57.28 rows=809\nloops=1)\n -> Hash Join (cost=22.51..110.75 rows=18\nwidth=21) (actual time=1.30..52.21 rows=809 loops=1)\n -> Seq Scan on bm_user_results bur\n(cost=0.00..70.01 rows=3601 width=17) (actual time=0.14..18.53 rows=3601\nloops=1)\n -> Hash (cost=22.50..22.50 rows=5\nwidth=4) (actual time=0.91..0.91 rows=0 loops=1)\n -> Seq Scan on bm_publications\nbp (cost=0.00..22.50 rows=5 width=4) (actual time=0.33..0.71 rows=98\nloops=1)\nTotal runtime: 145.69 msec\n\nEXPLAIN\nbm221=#\n\n\nHere is the explain from the original database:\n\nproject=# \\i bad.qry\npsql:bad.qry:78: NOTICE: QUERY PLAN:\n\nSort (cost=337.23..337.23 rows=1 width=237) (actual time=14903.87..14904.05\nrows=181 loops=1)\n -> Group (cost=337.19..337.22 rows=1 width=237) (actual\ntime=14895.90..14900.55 rows=181 loops=1)\n -> Sort (cost=337.19..337.19 rows=1 width=237) (actual\ntime=14895.87..14896.09 rows=181 loops=1)\n -> Nested Loop (cost=214.62..337.18 rows=1 width=237)\n(actual time=149.50..14886.63 rows=181 loops=1)\n -> Subquery Scan student_set (cost=208.82..208.84\nrows=1 width=115) (actual time=64.03..69.44 rows=181 loops=1)\n -> Unique (cost=208.82..208.84 rows=1 width=115)\n(actual time=64.02..67.25 rows=181 loops=1)\n -> Sort (cost=208.82..208.82 rows=1\nwidth=115) (actual time=64.01..64.36 rows=181 loops=1)\n -> Nested Loop (cost=16.54..208.81\nrows=1 width=115) (actual time=5.21..62.66 rows=181 loops=1)\n -> Nested Loop\n(cost=16.54..203.55 rows=1 width=88) (actual time=5.11..52.60 rows=181\nloops=1)\n -> Hash Join\n(cost=16.54..197.63 rows=1 width=64) (actual time=4.55..37.75 rows=181\nloops=1)\n -> Seq Scan on\nstudent_class_rlt scr (cost=0.00..178.16 rows=574 width=24) (actual\ntime=0.02..29.59 rows=527 loops=1)\n -> Hash\n(cost=16.54..16.54 rows=2 width=40) (actual time=3.84..3.84 rows=0 loops=1)\n -> Hash Join\n(cost=13.80..16.54 rows=2 width=40) (actual time=2.91..3.77 rows=27 loops=1)\n -> Seq\nScan on bm_subscriptions_rlt bsr (cost=0.00..2.14 rows=114 width=8) (actual\ntime=0.01..0.50 rows=114 loops=1)\n -> Hash\n(cost=13.80..13.80 rows=2 width=32) (actual time=2.81..2.81 rows=0 loops=1)\n ->\nHash Join (cost=1.06..13.80 rows=2 width=32) (actual time=1.74..2.78\nrows=11 loops=1)\n \n -> Seq Scan on bm_publications bp (cost=0.00..12.65 rows=11 width=12)\n(actual time=1.56..2.51 rows=11 loops=1)\n \n -> Hash (cost=1.06..1.06 rows=1 width=20) (actual time=0.06..0.06\nrows=0 loops=1)\n \n -> Seq Scan on bm_publication_events bpe (cost=0.00..1.06 rows=1\nwidth=20) (actual time=0.04..0.05 rows=1 loops=1)\n -> Index Scan using\nuser_common_pkey on user_common uc (cost=0.00..5.90 rows=1 width=24)\n(actual time=0.05..0.06 rows=1 loops=181)\n -> Index Scan using class_pkey\non classes c (cost=0.00..5.25 rows=1 width=27) (actual time=0.03..0.04\nrows=1 loops=181)\n -> Subquery Scan participation_set (cost=5.79..109.63\nrows=1248 width=21) (actual time=1.19..78.18 rows=816 loops=181)\n -> Hash Join (cost=5.79..109.63 rows=1248\nwidth=21) (actual time=1.18..71.10 rows=816 loops=181)\n -> Seq Scan on bm_user_results bur\n(cost=0.00..70.16 rows=3616 width=17) (actual time=0.01..20.96 rows=3620\nloops=181)\n -> Hash (cost=5.55..5.55 rows=98 width=4)\n(actual time=1.05..1.05 rows=0 loops=181)\n -> Seq Scan on bm_publications bp\n(cost=0.00..5.55 rows=98 width=4) (actual time=0.33..0.82 rows=98 loops=181)\nTotal runtime: 14905.87 msec\n\nEXPLAIN\nproject=#\n\n\nHere is the query:\n\nexplain analyze\nselect\n student_set.pub_id as pub_id,\n student_set.class_id as class,\n 
student_set.class_name as class_name,\n student_set.user_id as student,\n student_set.first_name,\n student_set.last_name,\n participation_set.started,\n participation_set.complete,\n day,month\n\nfrom\n (\n\n select distinct\n scr.user_id,\n scr.class_id,\n uc.first_name,\n uc.last_name,\n bp.bm_publication_id as pub_id,\n c.class_name\n from student_class_rlt scr,\n user_common uc,\n bm_subscriptions_rlt bsr,\n bm_publications bp CROSS JOIN\n bm_publication_events bpe,\n classes c\n where\n bpe.bm_publication_event_id = 4\n and bpe.bm_publication_event_id =\nbp.bm_publication_event_id\n and bp.bm_series_id = bsr.bm_series_id\n and bsr.class_id = scr.class_id\n and scr.class_id = c.class_id\n and (scr.end_date is null or scr.end_date >=\nbpe.due_date)\n and scr.start_date <= bpe.publication_date\n and scr.status_id != 2\n and scr.user_id = uc.user_id\nand bp.bm_publication_id in (\n4,25,1,3,26,19,\n,11,27,90,20,28\n)\n\n ) student_set\n left join\n (\n\n select user_id,\n initial_timestmp as started,\n to_char( initial_timestmp, 'MM/DD' ) as\nday,\n to_char( initial_timestmp, 'Month YYYY' )\nas month,\n complete,\n bur.bm_publication_id as pub_id\n from\n bm_publications bp,\n bm_user_results bur\n where\n bp.bm_publication_event_id = 4\n and bp.bm_publication_id = bur.bm_publication_id\n\n\n ) participation_set\n on\n (\n student_set.user_id =\nparticipation_set.user_id\n and student_set.pub_id =\nparticipation_set.pub_id\n )\n group by student_set.pub_id, class, class_name, student,\nlast_name, first_name, started, complete, day, month\n order by student_set.pub_id, class, last_name, month, day\n\n;\n\n\nThanks.\n\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Josh Berkus\n> Sent: Monday, October 07, 2002 2:29 PM\n> To: [email protected]; [email protected]\n> Subject: Re: [pgsql-performance] sloooow query\n>\n>\n>\n> Marie,\n>\n> > I am experiencing slow db performance. I have vacuumed,\n> analyzed, reindexed\n> > using the force option and performance remains the same -\n> dog-slow :( If I\n> > drop and recreate the database, performance is normal, so this\n> suggests a\n> > problem with the indexes? I also took a look at the\n> postgresql.conf and all\n> > appears fine. There are many instances of the same database running on\n> > different servers and not all servers are experiencing the problem.\n>\n> Please post the following:\n> 1) A copy of the relevant portions of your database schema.\n> 2) The query that is running slowly.\n> 3) The results of running EXPLAIN on that query.\n> 4) Your PostgreSQL version and operating system\n> 5) Any other relevant information about your databases, such as\n> the quantity\n> of inserts and deletes on the relevant tables.\n>\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n", "msg_date": "Mon, 7 Oct 2002 14:49:16 -0500", "msg_from": "\"Marie G. 
Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "Here is a show all:\n\nThanks,\n\n\nproject-# ;\nNOTICE: enable_seqscan is on\nNOTICE: enable_indexscan is on\nNOTICE: enable_tidscan is on\nNOTICE: enable_sort is on\nNOTICE: enable_nestloop is on\nNOTICE: enable_mergejoin is on\nNOTICE: enable_hashjoin is on\nNOTICE: ksqo is off\nNOTICE: geqo is on\nNOTICE: tcpip_socket is on\nNOTICE: ssl is off\nNOTICE: fsync is on\nNOTICE: silent_mode is off\nNOTICE: log_connections is off\nNOTICE: log_timestamp is off\nNOTICE: log_pid is off\nNOTICE: debug_print_query is off\nNOTICE: debug_print_parse is off\nNOTICE: debug_print_rewritten is off\nNOTICE: debug_print_plan is off\nNOTICE: debug_pretty_print is off\nNOTICE: show_parser_stats is off\nNOTICE: show_planner_stats is off\nNOTICE: show_executor_stats is off\nNOTICE: show_query_stats is off\nNOTICE: stats_start_collector is on\nNOTICE: stats_reset_on_server_start is on\nNOTICE: stats_command_string is off\nNOTICE: stats_row_level is off\nNOTICE: stats_block_level is off\nNOTICE: trace_notify is off\nNOTICE: hostname_lookup is off\nNOTICE: show_source_port is off\nNOTICE: sql_inheritance is on\nNOTICE: australian_timezones is off\nNOTICE: fixbtree is on\nNOTICE: password_encryption is off\nNOTICE: transform_null_equals is off\nNOTICE: geqo_threshold is 11\nNOTICE: geqo_pool_size is 0\nNOTICE: geqo_effort is 1\nNOTICE: geqo_generations is 0\nNOTICE: geqo_random_seed is -1\nNOTICE: deadlock_timeout is 1000\nNOTICE: syslog is 0\nNOTICE: max_connections is 64\nNOTICE: shared_buffers is 128\nNOTICE: port is 5432\nNOTICE: unix_socket_permissions is 511\nNOTICE: sort_mem is 1024\nNOTICE: vacuum_mem is 8192\nNOTICE: max_files_per_process is 1000\nNOTICE: debug_level is 0\nNOTICE: max_expr_depth is 10000\nNOTICE: max_fsm_relations is 100\nNOTICE: max_fsm_pages is 10000\nNOTICE: max_locks_per_transaction is 64\nNOTICE: authentication_timeout is 60\nNOTICE: pre_auth_delay is 0\nNOTICE: checkpoint_segments is 3\nNOTICE: checkpoint_timeout is 300\nNOTICE: wal_buffers is 8\nNOTICE: wal_files is 0\nNOTICE: wal_debug is 0\nNOTICE: commit_delay is 0\nNOTICE: commit_siblings is 5\nNOTICE: effective_cache_size is 1000\nNOTICE: random_page_cost is 4\nNOTICE: cpu_tuple_cost is 0.01\nNOTICE: cpu_index_tuple_cost is 0.001\nNOTICE: cpu_operator_cost is 0.0025\nNOTICE: geqo_selection_bias is 2\nNOTICE: default_transaction_isolation is read committed\nNOTICE: dynamic_library_path is $libdir\nNOTICE: krb_server_keyfile is FILE:/etc/pgsql/krb5.keytab\nNOTICE: syslog_facility is LOCAL0\nNOTICE: syslog_ident is postgres\nNOTICE: unix_socket_group is unset\nNOTICE: unix_socket_directory is unset\nNOTICE: virtual_host is unset\nNOTICE: wal_sync_method is fdatasync\nNOTICE: DateStyle is ISO with US (NonEuropean) conventions\nNOTICE: Time zone is unset\nNOTICE: TRANSACTION ISOLATION LEVEL is READ COMMITTED\nNOTICE: Current client encoding is 'SQL_ASCII'\nNOTICE: Current server encoding is 'SQL_ASCII'\nNOTICE: Seed for random number generator is unavailable\nSHOW VARIABLE\nproject=#\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Justin Clift\n> Sent: Monday, October 07, 2002 2:30 PM\n> To: [email protected]\n> Cc: [email protected]; [email protected]\n> Subject: Re: [pgsql-performance] sloooow query\n>\n>\n> Josh Berkus wrote:\n> >\n> > Marie,\n> >\n> > > I am experiencing slow db performance. 
I have vacuumed,\n> analyzed, reindexed\n> > > using the force option and performance remains the same -\n> dog-slow :( If I\n> > > drop and recreate the database, performance is normal, so\n> this suggests a\n> > > problem with the indexes? I also took a look at the\n> postgresql.conf and all\n> > > appears fine. There are many instances of the same database\n> running on\n> > > different servers and not all servers are experiencing the problem.\n> >\n> > Please post the following:\n> > 1) A copy of the relevant portions of your database schema.\n> > 2) The query that is running slowly.\n> > 3) The results of running EXPLAIN on that query.\n> > 4) Your PostgreSQL version and operating system\n> > 5) Any other relevant information about your databases, such as\n> the quantity\n> > of inserts and deletes on the relevant tables.\n>\n> 6) And the sort_mem, shared_buffers, vacuum_mem, wal_buffers, and\n> wal_files settings from your postgresql.conf file, if possible.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> > --\n> > -Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\n", "msg_date": "Mon, 7 Oct 2002 14:53:35 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "\nMarie,\n\n> I pg_dumped the first database having performance problems and reloaded it\n> into a new database on the same server. The query ran normally when I\n> reloaded it. There is no difference in hardware, schema or anything else.\n\nThat's a pretty brutal query. \n\n From the comparison between the two queries, it looks like you have a lot of \ndiscarded rows cluttering up the original database, just like I did. \n\nWhat happens if you run VACUUM FULL VERBOSE on the Bad database? Does it \nreport lots of rows taken up?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Mon, 7 Oct 2002 13:12:21 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "Hi Marie,\n\nOk, not sure about the SQL side of things (got scared just *looking* at\nthat query), but if this is at least a mostly-dedicated database server\nthen you might want to bump up some of those buffer values. They look\nlike defaults (except the max_connections and shared buffers).\n\nInitial thought is making just sort_mem = 8192 or so as a minimum (it\ncould go a lot higher, but not sure of your memory configuration), as\nsee if that makes a difference.\n\nNot sure the wal_files = 0 bit is good either. 
Haven't seen that set to\n0 before.\n\nMight not assist with your present crisis, but am guessing PostgreSQL is\nchewing a lot of CPU and being slow in general with the present\nsettings.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\"Marie G. Tuite\" wrote:\n> \n> Here is a show all:\n> \n> Thanks,\n> \n> project-# ;\n> NOTICE: enable_seqscan is on\n> NOTICE: enable_indexscan is on\n> NOTICE: enable_tidscan is on\n> NOTICE: enable_sort is on\n> NOTICE: enable_nestloop is on\n> NOTICE: enable_mergejoin is on\n> NOTICE: enable_hashjoin is on\n> NOTICE: ksqo is off\n> NOTICE: geqo is on\n> NOTICE: tcpip_socket is on\n> NOTICE: ssl is off\n> NOTICE: fsync is on\n> NOTICE: silent_mode is off\n> NOTICE: log_connections is off\n> NOTICE: log_timestamp is off\n> NOTICE: log_pid is off\n> NOTICE: debug_print_query is off\n> NOTICE: debug_print_parse is off\n> NOTICE: debug_print_rewritten is off\n> NOTICE: debug_print_plan is off\n> NOTICE: debug_pretty_print is off\n> NOTICE: show_parser_stats is off\n> NOTICE: show_planner_stats is off\n> NOTICE: show_executor_stats is off\n> NOTICE: show_query_stats is off\n> NOTICE: stats_start_collector is on\n> NOTICE: stats_reset_on_server_start is on\n> NOTICE: stats_command_string is off\n> NOTICE: stats_row_level is off\n> NOTICE: stats_block_level is off\n> NOTICE: trace_notify is off\n> NOTICE: hostname_lookup is off\n> NOTICE: show_source_port is off\n> NOTICE: sql_inheritance is on\n> NOTICE: australian_timezones is off\n> NOTICE: fixbtree is on\n> NOTICE: password_encryption is off\n> NOTICE: transform_null_equals is off\n> NOTICE: geqo_threshold is 11\n> NOTICE: geqo_pool_size is 0\n> NOTICE: geqo_effort is 1\n> NOTICE: geqo_generations is 0\n> NOTICE: geqo_random_seed is -1\n> NOTICE: deadlock_timeout is 1000\n> NOTICE: syslog is 0\n> NOTICE: max_connections is 64\n> NOTICE: shared_buffers is 128\n> NOTICE: port is 5432\n> NOTICE: unix_socket_permissions is 511\n> NOTICE: sort_mem is 1024\n> NOTICE: vacuum_mem is 8192\n> NOTICE: max_files_per_process is 1000\n> NOTICE: debug_level is 0\n> NOTICE: max_expr_depth is 10000\n> NOTICE: max_fsm_relations is 100\n> NOTICE: max_fsm_pages is 10000\n> NOTICE: max_locks_per_transaction is 64\n> NOTICE: authentication_timeout is 60\n> NOTICE: pre_auth_delay is 0\n> NOTICE: checkpoint_segments is 3\n> NOTICE: checkpoint_timeout is 300\n> NOTICE: wal_buffers is 8\n> NOTICE: wal_files is 0\n> NOTICE: wal_debug is 0\n> NOTICE: commit_delay is 0\n> NOTICE: commit_siblings is 5\n> NOTICE: effective_cache_size is 1000\n> NOTICE: random_page_cost is 4\n> NOTICE: cpu_tuple_cost is 0.01\n> NOTICE: cpu_index_tuple_cost is 0.001\n> NOTICE: cpu_operator_cost is 0.0025\n> NOTICE: geqo_selection_bias is 2\n> NOTICE: default_transaction_isolation is read committed\n> NOTICE: dynamic_library_path is $libdir\n> NOTICE: krb_server_keyfile is FILE:/etc/pgsql/krb5.keytab\n> NOTICE: syslog_facility is LOCAL0\n> NOTICE: syslog_ident is postgres\n> NOTICE: unix_socket_group is unset\n> NOTICE: unix_socket_directory is unset\n> NOTICE: virtual_host is unset\n> NOTICE: wal_sync_method is fdatasync\n> NOTICE: DateStyle is ISO with US (NonEuropean) conventions\n> NOTICE: Time zone is unset\n> NOTICE: TRANSACTION ISOLATION LEVEL is READ COMMITTED\n> NOTICE: Current client encoding is 'SQL_ASCII'\n> NOTICE: Current server encoding is 'SQL_ASCII'\n> NOTICE: Seed for random number generator is unavailable\n> SHOW VARIABLE\n> project=#\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email 
protected]]On Behalf Of Justin Clift\n> > Sent: Monday, October 07, 2002 2:30 PM\n> > To: [email protected]\n> > Cc: [email protected]; [email protected]\n> > Subject: Re: [pgsql-performance] sloooow query\n> >\n> >\n> > Josh Berkus wrote:\n> > >\n> > > Marie,\n> > >\n> > > > I am experiencing slow db performance. I have vacuumed,\n> > analyzed, reindexed\n> > > > using the force option and performance remains the same -\n> > dog-slow :( If I\n> > > > drop and recreate the database, performance is normal, so\n> > this suggests a\n> > > > problem with the indexes? I also took a look at the\n> > postgresql.conf and all\n> > > > appears fine. There are many instances of the same database\n> > running on\n> > > > different servers and not all servers are experiencing the problem.\n> > >\n> > > Please post the following:\n> > > 1) A copy of the relevant portions of your database schema.\n> > > 2) The query that is running slowly.\n> > > 3) The results of running EXPLAIN on that query.\n> > > 4) Your PostgreSQL version and operating system\n> > > 5) Any other relevant information about your databases, such as\n> > the quantity\n> > > of inserts and deletes on the relevant tables.\n> >\n> > 6) And the sort_mem, shared_buffers, vacuum_mem, wal_buffers, and\n> > wal_files settings from your postgresql.conf file, if possible.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> >\n> > > --\n> > > -Josh Berkus\n> > > Aglio Database Solutions\n> > > San Francisco\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> > --\n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 08 Oct 2002 06:15:35 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "> What happens if you run VACUUM FULL VERBOSE on the Bad database? Does it\n> report lots of rows taken up?\n\nI ran the vacuum for selected tables. It looks fine, I think, but I amn't\nalways sure what I am reading in output.\n\nproject=# vacuum full verbose classes;\nNOTICE: --Relation classes--\nNOTICE: Pages 5: Changed 0, reaped 2, Empty 0, New 0; Tup 332: Vac 0,\nKeep/VTL 0/0, UnUsed 33, MinLen 93, MaxLen 117; Re-using: Free/Avail. Space\n3020/2832; EndEmpty/Avail. Pages 0/1.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index class_pkey: Pages 5; Tuples 332: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel classes: Pages: 5 --> 5; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_595650--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0;\nEndEmpty/Avail. 
Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_595650_idx: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nproject=# vacuum full verbose bm_publications;\nNOTICE: --Relation bm_publications--\nNOTICE: Pages 2: Changed 0, reaped 1, Empty 0, New 0; Tup 284: Vac 0,\nKeep/VTL 0/0, UnUsed 6, MinLen 52, MaxLen 52; Re-using: Free/Avail. Space\n416/416; EndEmpty/Avail. Pages 0/2.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index bm_publications_pkey: Pages 4; Tuples 284: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel bm_publications: Pages: 2 --> 2; Tuple(s) moved: 0.\n CPU 0.00s/0.01u sec elapsed 0.00 sec.\nVACUUM\nproject=# vacuum full verbose user_common;\nNOTICE: --Relation user_common--\nNOTICE: Pages 21: Changed 0, reaped 19, Empty 0, New 0; Tup 1045: Vac 0,\nKeep/VTL 0/0, UnUsed 103, MinLen 117, MaxLen 221; Re-using: Free/Avail.\nSpace 4080/2968; EndEmpty/Avail. Pages 0/2.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index user_common_pkey: Pages 20; Tuples 1045: Deleted 0.\n CPU 0.01s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel user_common: Pages: 21 --> 21; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_474892--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0,\nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0;\nEndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_474892_idx: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nproject=# vacuum full verbose bm_subscriptions_rlt;\nNOTICE: --Relation bm_subscriptions_rlt--\nNOTICE: Pages 1: Changed 0, reaped 1, Empty 0, New 0; Tup 114: Vac 0,\nKeep/VTL 0/0, UnUsed 1, MinLen 57, MaxLen 57; Re-using: Free/Avail. Space\n872/872; EndEmpty/Avail. Pages 0/1.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index bm_subscriptions_rlt_pkey: Pages 2; Tuples 114: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel bm_subscriptions_rlt: Pages: 1 --> 1; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nproject=#\n\n\n", "msg_date": "Mon, 7 Oct 2002 15:34:11 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "\nMarie,\n\n> I ran the vacuum for selected tables. It looks fine, I think, but I amn't\n> always sure what I am reading in output.\n\nSo much for the easy answer. The reason I wanted to see a VACUUM FULL is \nthat the query on the \"bad\" database is taking a long time to return even the \nfirst row of many of its sub-parts. This is usually the result of not \nrunning VACUUM FULL after a lot of deletions.\n\nHowever, your problem apparently is something else. Is is possible that \nthere is some kind of disk access problem for the bad database copy? Is \nthere a difference in where its files are physically located?\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 7 Oct 2002 13:44:31 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "On Mon, 2002-10-07 at 16:15, Justin Clift wrote:\n> Hi Marie,\n> \n> Not sure the wal_files = 0 bit is good either. Haven't seen that set to\n> 0 before.\n> \n\nThis is the default value, and I don't recall anything in the docs that\nwould suggest to change it. Also IIRC the back end will auto adjust the\n# of wal_files as needed in newer versions. 
Unless your seeing messages\nlike \"DEBUG: XLogWrite: new log file created - consider increasing\nWAL_FILES\" I think you can leave this alone. Can you point me to\nsomething that says different?\n\nRobert Treat\n\n\n", "msg_date": "07 Oct 2002 16:50:27 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "On Mon, 2002-10-07 at 16:34, Marie G. Tuite wrote:\n> > What happens if you run VACUUM FULL VERBOSE on the Bad database? Does it\n> > report lots of rows taken up?\n> \n> I ran the vacuum for selected tables. It looks fine, I think, but I amn't\n> always sure what I am reading in output.\n> \n\nIs this vacuum being done on a system that is currently running slow, or\nwas this system recently dropped/reloaded?\n\nRobert Treat\n\n\n", "msg_date": "07 Oct 2002 16:52:16 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "\"Marie G. Tuite\" <[email protected]> writes:\n> I pg_dumped the first database having performance problems and reloaded it\n> into a new database on the same server. The query ran normally when I\n> reloaded it. There is no difference in hardware, schema or anything else.\n\nHave you done an ANALYZE or VACUUM ANALYZE in either database? The\nstatistics the planner is working from seem to be quite different\nin the two plans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 16:57:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query " }, { "msg_contents": "Robert Treat wrote:\n> \n> On Mon, 2002-10-07 at 16:15, Justin Clift wrote:\n> > Hi Marie,\n> >\n> > Not sure the wal_files = 0 bit is good either. Haven't seen that set to\n> > 0 before.\n> >\n> \n> This is the default value, and I don't recall anything in the docs that\n> would suggest to change it. Also IIRC the back end will auto adjust the\n> # of wal_files as needed in newer versions. Unless your seeing messages\n> like \"DEBUG: XLogWrite: new log file created - consider increasing\n> WAL_FILES\" I think you can leave this alone. Can you point me to\n> something that says different?\n\nAhh... that makes sense. Have been doing almost nothing else recently\nexcept for setting up new PostgreSQL databases, loading in data, then\ndoing load testing for things.\n\nHave totally become so used to having wal_files being other than 0 that\nit didn't even register that this is the default. ;->\n\nSorry about that, and thanks for the heads up.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Robert Treat\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 08 Oct 2002 07:00:57 +1000", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" }, { "msg_contents": "I have analyzed, vacuumed and reindexed.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, October 07, 2002 3:58 PM\n> To: [email protected]\n> Cc: [email protected]; [email protected]\n> Subject: Re: [pgsql-performance] sloooow query \n> \n> \n> \"Marie G. Tuite\" <[email protected]> writes:\n> > I pg_dumped the first database having performance problems and \n> reloaded it\n> > into a new database on the same server. 
The query ran normally when I\n> > reloaded it. There is no difference in hardware, schema or \n> anything else.\n> \n> Have you done an ANALYZE or VACUUM ANALYZE in either database? The\n> statistics the planner is working from seem to be quite different\n> in the two plans.\n> \n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Mon, 7 Oct 2002 17:04:09 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query " }, { "msg_contents": "> Is this vacuum being done on a system that is currently running slow, or\n> was this system recently dropped/reloaded?\n\nCurrently slow. \n\n", "msg_date": "Mon, 7 Oct 2002 17:04:09 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "> However, your problem apparently is something else. Is is possible that \n> there is some kind of disk access problem for the bad database copy? Is \n> there a difference in where its files are physically located?\n\nBoth are in default storage - /var/lib/pgsql/data.\n\n", "msg_date": "Mon, 7 Oct 2002 17:05:41 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "> Is this vacuum being done on a system that is currently running slow, or\n> was this system recently dropped/reloaded?\n\nCurrently slow.\n\n", "msg_date": "Mon, 7 Oct 2002 17:09:46 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query" }, { "msg_contents": "\n> Have you done an ANALYZE or VACUUM ANALYZE in either database? The\n> statistics the planner is working from seem to be quite different\n> in the two plans.\n\nI have vacuumed, analysed and reindexed.\n\n\n", "msg_date": "Mon, 7 Oct 2002 17:12:41 -0500", "msg_from": "\"Marie G. Tuite\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sloooow query " }, { "msg_contents": "On Mon, Oct 07, 2002 at 02:22:09PM -0500, Marie G. Tuite wrote:\n> Hello all,\n> \n> I am experiencing slow db performance. I have vacuumed, analyzed, reindexed\n> using the force option and performance remains the same - dog-slow :( If I\n> drop and recreate the database, performance is normal, so this suggests a\n> problem with the indexes? I also took a look at the postgresql.conf and all\n> appears fine. There are many instances of the same database running on\n> different servers and not all servers are experiencing the problem.\n\nWe need more details if you wish to receive useful answers. Query/EXPLAIN\noutput/schema, etc\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Tue, 8 Oct 2002 10:03:08 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sloooow query" } ]
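When a query is fast on a freshly reloaded copy but slow on the original database, the usual checks in this release are physical table size (dead-row bloat) and planner statistics. A minimal sketch of those checks follows; the table names are taken from the plans in this thread, and which tables are worth checking depends on the actual plan:

    -- refresh planner statistics and reclaim dead rows in the suspect tables
    VACUUM FULL ANALYZE student_class_rlt;
    VACUUM FULL ANALYZE bm_user_results;

    -- compare physical size against the reloaded copy: a much larger relpages
    -- for roughly the same reltuples points at dead-row bloat
    SELECT relname, relpages, reltuples
      FROM pg_class
     WHERE relname IN ('student_class_rlt', 'bm_user_results', 'bm_publications');

    -- index bloat is not reclaimed by plain VACUUM in 7.2, so rebuild if needed
    REINDEX TABLE student_class_rlt;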
[ { "msg_contents": "Folks,\n\nI'm still having trouble with my massive data transformation procedures\ntaking forever to finish. Particularly, many of them will get about\n1/2 way through, and then I will start seeing this in the log:\n\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E4\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E5\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E6\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E7\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E8\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000E9\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000EA\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000EB\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000EC\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000ED\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000EE\n2002-10-08 20:37:13 DEBUG: recycled transaction log file\n0000000A000000EF\n2002-10-08 20:37:13 DEBUG: reaping dead processes\n2002-10-08 20:37:13 DEBUG: child process (pid 15270) exited with exit\ncode 0\n\n... repeat ad nauseum. The problem is, each \"recycle transaction log\n... reaping dead child process\" cycle takes about 4-7 minutes ...\nmeaning that the procedure can take up to 1/2 hour to finish, and\nsometimes not finish at all.\n\nObviously, the system is telling me that it is running out of resources\nsomehow. But I'm at my wit's end to figure out what resources,\nexactly. Suggestions?\n\n-Josh Berkus\n", "msg_date": "Tue, 08 Oct 2002 20:46:12 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "What does this tell me?" }, { "msg_contents": "\nI think all it means is that is doesn't need some of the pg_clog files\nand is reusing them, basically meaning you are pushing through lots of\ntransactions. I don't see it as a problem.\n\n---------------------------------------------------------------------------\n\nJosh Berkus wrote:\n> Folks,\n> \n> I'm still having trouble with my massive data transformation procedures\n> taking forever to finish. Particularly, many of them will get about\n> 1/2 way through, and then I will start seeing this in the log:\n> \n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E4\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E5\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E6\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E7\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E8\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E9\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EA\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EB\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EC\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000ED\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EE\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EF\n> 2002-10-08 20:37:13 DEBUG: reaping dead processes\n> 2002-10-08 20:37:13 DEBUG: child process (pid 15270) exited with exit\n> code 0\n> \n> ... repeat ad nauseum. 
The problem is, each \"recycle transaction log\n> ... reaping dead child process\" cycle takes about 4-7 minutes ...\n> meaning that the procedure can take up to 1/2 hour to finish, and\n> sometimes not finish at all.\n> \n> Obviously, the system is telling me that it is running out of resources\n> somehow. But I'm at my wit's end to figure out what resources,\n> exactly. Suggestions?\n> \n> -Josh Berkus\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 8 Oct 2002 23:49:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E4\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E5\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E6\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E7\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E8\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000E9\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EA\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EB\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EC\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000ED\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EE\n> 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> 0000000A000000EF\n> 2002-10-08 20:37:13 DEBUG: reaping dead processes\n> 2002-10-08 20:37:13 DEBUG: child process (pid 15270) exited with exit\n> code 0\n> \n> ... repeat ad nauseum. The problem is, each \"recycle transaction\n> log ... reaping dead child process\" cycle takes about 4-7 minutes\n> ... meaning that the procedure can take up to 1/2 hour to finish,\n> and sometimes not finish at all.\n> \n> Obviously, the system is telling me that it is running out of\n> resources somehow. But I'm at my wit's end to figure out what\n> resources, exactly. Suggestions?\n\nYou're running out of WAL log space, iirc. Increase the number of WAL\nlogs available and you should be okay. If you're experiencing this\nhalfway through, I'd increase the size by 50%, say maybe 60-70% for\ngood measure. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Tue, 8 Oct 2002 20:50:43 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" 
}, { "msg_contents": "Sean Chittenden wrote:\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E4\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E5\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E6\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E7\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E8\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000E9\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000EA\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000EB\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000EC\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000ED\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000EE\n> > 2002-10-08 20:37:13 DEBUG: recycled transaction log file\n> > 0000000A000000EF\n> > 2002-10-08 20:37:13 DEBUG: reaping dead processes\n> > 2002-10-08 20:37:13 DEBUG: child process (pid 15270) exited with exit\n> > code 0\n> > \n> > ... repeat ad nauseum. The problem is, each \"recycle transaction\n> > log ... reaping dead child process\" cycle takes about 4-7 minutes\n> > ... meaning that the procedure can take up to 1/2 hour to finish,\n> > and sometimes not finish at all.\n> > \n> > Obviously, the system is telling me that it is running out of\n> > resources somehow. But I'm at my wit's end to figure out what\n> > resources, exactly. Suggestions?\n> \n> You're running out of WAL log space, iirc. Increase the number of WAL\n> logs available and you should be okay. If you're experiencing this\n> halfway through, I'd increase the size by 50%, say maybe 60-70% for\n> good measure. -sc\n\nOh, yes, you are right. My hardware tuning guide mentions it. Strange\nit is called the transaction log file:\n\n\thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n\nUnless you are seeing this more freqently than every minute, it should\nbe fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 8 Oct 2002 23:55:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "Bruce, Sean,\n\n> Oh, yes, you are right. My hardware tuning guide mentions it.\n> Strange\n> it is called the transaction log file:\n> \n> http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> Unless you are seeing this more freqently than every minute, it\n> should\n> be fine.\n\nActually, it's apparently a real problem, because the function never\ncompletes. Each cycle of \"recycling transaction logs\" takes longer\nand longer, and eventually locks up completely.\n\nWhat the function is doing is a succession of data cleanup procedures,\nupdating the same table about 50 times. I will be very thankful for\nthe day when I can commit within a procedure.\n\nUnfortunately, I am already at the maximum number of WAL files (64).\n What do I do now?\n\n-Josh Berkus\n\n", "msg_date": "Tue, 08 Oct 2002 21:01:58 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What does this tell me?" 
}, { "msg_contents": "Josh Berkus wrote:\n> Bruce, Sean,\n> \n> > Oh, yes, you are right. My hardware tuning guide mentions it.\n> > Strange\n> > it is called the transaction log file:\n> > \n> > http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> > \n> > Unless you are seeing this more freqently than every minute, it\n> > should\n> > be fine.\n> \n> Actually, it's apparently a real problem, because the function never\n> completes. Each cycle of \"recycling transaction logs\" takes longer\n> and longer, and eventually locks up completely.\n> \n> What the function is doing is a succession of data cleanup procedures,\n> updating the same table about 50 times. I will be very thankful for\n> the day when I can commit within a procedure.\n> \n> Unfortunately, I am already at the maximum number of WAL files (64).\n> What do I do now?\n\nWow, that is interesting. I thought one big transaction wouldn't lock\nup the WAL records. I figured there would be a CHECKPOINT, and then the\nWAL records could be recycled, even though the transaction is still\nopen.\n\nWhere do you see 64 as the maximum number of WAL segments. What is your\ncheckpoint_segments value? The actual number of files shouldn't be much\nmore than twice that value. What PostgreSQL version are you using?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 00:07:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "> > Oh, yes, you are right. My hardware tuning guide mentions it.\n> > Strange it is called the transaction log file:\n> > \n> > http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> > \n> > Unless you are seeing this more freqently than every minute, it\n> > should\n> > be fine.\n> \n> Actually, it's apparently a real problem, because the function never\n> completes. Each cycle of \"recycling transaction logs\" takes longer\n> and longer, and eventually locks up completely.\n> \n> What the function is doing is a succession of data cleanup\n> procedures, updating the same table about 50 times. I will be very\n> thankful for the day when I can commit within a procedure.\n> \n> Unfortunately, I am already at the maximum number of WAL files (64).\n> What do I do now?\n\nIsn't it possible to increase the size of your wal logs? I seem to\nremember a tunable existing, but I can't find it in the default\nconfig. Someone else know how off the top of their head? -sc\n\n-- \nSean Chittenden\n", "msg_date": "Tue, 8 Oct 2002 21:17:50 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "Josh Berkus wrote:\n> What the function is doing is a succession of data cleanup procedures,\n> updating the same table about 50 times. I will be very thankful for\n> the day when I can commit within a procedure.\n\nIf that's the case, can you split the work up into multiple functions, and \nexecute them all from a shell script? Or perhaps even offload some of the data \nmassaging to perl or something? (It would be easier to recommend alternate \napproaches with more details.)\n\nJoe\n\n\n", "msg_date": "Tue, 08 Oct 2002 21:55:21 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" 
}, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n> Actually, it's apparently a real problem, because the function never\n> completes. Each cycle of \"recycling transaction logs\" takes longer\n> and longer, and eventually locks up completely.\n> What the function is doing is a succession of data cleanup procedures,\n> updating the same table about 50 times. I will be very thankful for\n> the day when I can commit within a procedure.\n\nI think you are barking up the wrong tree.\n\nThe messages you show are perfectly normal operation, and prove nothing\nmuch except that you pumped a lot of database updates through the\nsystem. I think there's something wrong with your data transformation\napplication logic; or perhaps you are pumping so many updates through\nyour tables that you need some intermediate VACUUMs to get rid of\ndead tuples. But messing with the WAL log parameters isn't going to\ndo a darn thing for you ... IMHO anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 01:22:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me? " }, { "msg_contents": "Joe,\n\n> If that's the case, can you split the work up into multiple\n> functions, and execute them all from a shell script? Or perhaps even\n> offload some of the data massaging to perl or something? (It would be\n> easier to recommend alternate approaches with more details.)\n\nI've already split it up into 11 functions, which are being managed\nthrough Perl with ANALYZE statements between. Breaking it down\nfurther would be really unmanageable.\n\nNot to be mean or anything (after all, I just joined pgsql-advocacy),\nI'm getting *much* worse performance on large data transformations from\nPostgreSQL 7.2.1, than I get from SQL Server 7.0 on inferior hardware\n(at least, except where SQL Server 7.0 crashes). I really am determined\nto prove that it's because I've misconfigured it, and I thank all of\nyou for your help in doing so.\n\nPGBench Results:\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 100\nnumber of transactions per client: 10\nnumber of transactions actually processed: 1000/1000\ntps = 93.206356(including connections establishing)\ntps = 103.237007(excluding connections establishing)\n\nOf course, I don't have much to compare these to, so I don't know if\nthat's good or bad.\n\n-Josh Berkus\n\n \n", "msg_date": "Tue, 08 Oct 2002 22:22:37 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "On Wed, 2002-10-09 at 01:22, Josh Berkus wrote:\n> Joe,\n> \n> > If that's the case, can you split the work up into multiple\n> > functions, and execute them all from a shell script? Or perhaps even\n> > offload some of the data massaging to perl or something? (It would be\n> > easier to recommend alternate approaches with more details.)\n> \n> I've already split it up into 11 functions, which are being managed\n> through Perl with ANALYZE statements between. Breaking it down\n> further would be really unmanageable.\n> \n\nIf I read Tom's suggestion correctly, you should probably change these\nto vacuum analyze instead of analyze. \n\n> Not to be mean or anything (after all, I just joined pgsql-advocacy),\n> I'm getting *much* worse performance on large data transformations from\n> PostgreSQL 7.2.1, than I get from SQL Server 7.0 on inferior hardware\n> (at least, except where SQL Server 7.0 crashes). \n\nwhat?? 
that's blasphemy!! revoke this man's advocacy membership right\nnow!! ;-)\n\n\n> I really am determined\n> to prove that it's because I've misconfigured it, and I thank all of\n> you for your help in doing so.\n> \n\nFWIW I just ran into a similar situation where I was doing 6\nsimultaneous pg_restores of our production database on my local\nworkstation. Apparently this pumps a lot of data through the wal logs. \nI did kick up the number of wal files, but I also ended up kicking up\nthe number of wal_buffers as well and that seemed to help. \n\nRobert Treat\n\n\n", "msg_date": "09 Oct 2002 09:57:18 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" } ]
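For a bulk transformation like the one in this thread, the advice above amounts to spreading checkpoints out and vacuuming between passes. A rough sketch against the 7.2 defaults; the values are illustrative guesses, not recommendations:

    # postgresql.conf (defaults are checkpoint_segments = 3, wal_buffers = 8)
    checkpoint_segments = 16        # each WAL segment is 16MB
    checkpoint_timeout = 600
    wal_buffers = 16

    -- issued from the driver script between transformation functions
    VACUUM ANALYZE work_table;      -- work_table is a hypothetical name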
[ { "msg_contents": "Bruce,\n\n> > First, an interesting wierdness from the VACUUM FULL ANALYZE:\n> > Analyzing elbs_clidesc\n> > 2002-10-08 21:08:08 DEBUG: SIInsertDataEntry: table is 70% full,\n> > signaling postmaster\n> > \n> > Huh?\n> \n> Well, you are dealing with elbs. That is the problem. ;-)\n\n<grin> As you probably guessed, the purpose of these procedures is to\ntake a large amount (about 60mb) of not-normalized data from ELBS and\nnormalize it for our web-based case management system. \n\nWhat's really frustrating about it is that we're only going to be doing\nthis for 2-3 months before we jettison ELBS for reasons that should be\nobvious to you. But for those 2-3 months, the data transfer needs to\nwork well, and right now it doesn't even finish.\n\n> You shoulnd't need that and it shouldn't lock up when it gets to 64.\n> It\n> should checkpoint and move on. The only problem with it being lower\n> is\n> that it will checkpoint more often.\n\nWell, I'll try 128 and see if that helps any.\n\n> \n> > Rest of postgresql.conf params after my signature. All\n> suggestions\n> > are welcome. This server has been acting \"sick\" since I started\n> with\n> > it, under-performing my workstation and MS SQL Server. Either I've\n> set\n> > something wrong, or there's a hardware problem I need to track\n> down.\n> > \n> > BTW, is there any problem for postgres in turning the fill access\n> time\n> > recorder in the host filesystem off? This is often good for a\n> minor\n> > performance gain.\n> \n> No problem.\n> \n> You might want to try pgbench and see if that works.\n\nYeah. I was planning on that -- as well as the postgresql.conf tuner\n-- as soon as I can get through one data transfer so that I have a\nlittle working time.\n\n-Josh Berkus\n", "msg_date": "Tue, 08 Oct 2002 21:33:52 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What does this tell me?" }, { "msg_contents": "Josh Berkus wrote:\n> > > Rest of postgresql.conf params after my signature. All\n> > suggestions\n> > > are welcome. This server has been acting \"sick\" since I started\n> > with\n> > > it, under-performing my workstation and MS SQL Server. Either I've\n> > set\n> > > something wrong, or there's a hardware problem I need to track\n> > down.\n> > > \n> > > BTW, is there any problem for postgres in turning the fill access\n> > time\n> > > recorder in the host filesystem off? This is often good for a\n> > minor\n> > > performance gain.\n> > \n> > No problem.\n> > \n> > You might want to try pgbench and see if that works.\n> \n> Yeah. I was planning on that -- as well as the postgresql.conf tuner\n> -- as soon as I can get through one data transfer so that I have a\n> little working time.\n\nI was suggesting pgbench because the system should never lock up on you.\nMaybe something is very wrong.\n\nWhat happens if you issue the CHECKPOINT command?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 00:38:10 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What does this tell me?" } ]
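The pgbench sanity check suggested above only needs a throwaway database; the scale, client and transaction counts below are arbitrary:

    $ createdb bench
    $ pgbench -i -s 10 bench           # initialize with scaling factor 10
    $ pgbench -c 10 -t 100 bench       # 10 clients, 100 transactions each
    $ psql bench -c 'CHECKPOINT;'      # should return promptly on a healthy system

If the manual CHECKPOINT hangs, or pgbench throughput is far below what comparable hardware achieves, the problem is in the installation or the disk subsystem rather than in the transformation functions.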
[ { "msg_contents": "subscribe pgsql-performance\n\n", "msg_date": "Wed, 9 Oct 2002 19:32:19 UT", "msg_from": "\"Rich Scott\" <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Hi,\n\nI am trying to optimise a complex query adding explicit joins and creating\nindices. I am going through the EXPLAIN output (~70 lines) but am fairly new\nat reading these. \n\nAre there any good rules of thumb of things one should be looking out for in\nEXPLAIN output? ie <blah> means that an index would be good here etc\n\nI have read through the docs for EXPLAIN, but I was wondering if there were\nany more detailed descriptions or docs on the subject.\n\nThanks for any help\n\nadam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Thu, 10 Oct 2002 16:41:43 +0100", "msg_from": "Adam Witney <[email protected]>", "msg_from_op": true, "msg_subject": "Info on explain output" }, { "msg_contents": "\nHave you looked at the internals PDF at the bottom of the developers\nlounge web page?\n\n---------------------------------------------------------------------------\n\nAdam Witney wrote:\n> Hi,\n> \n> I am trying to optimise a complex query adding explicit joins and creating\n> indices. I am going through the EXPLAIN output (~70 lines) but am fairly new\n> at reading these. \n> \n> Are there any good rules of thumb of things one should be looking out for in\n> EXPLAIN output? ie <blah> means that an index would be good here etc\n> \n> I have read through the docs for EXPLAIN, but I was wondering if there were\n> any more detailed descriptions or docs on the subject.\n> \n> Thanks for any help\n> \n> adam\n> \n> \n> -- \n> This message has been scanned for viruses and\n> dangerous content by MailScanner, and is\n> believed to be clean.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 10 Oct 2002 12:44:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Info on explain output" }, { "msg_contents": "Adam,\n\n> > Are there any good rules of thumb of things one should be looking\n> out for in\n> > EXPLAIN output? ie <blah> means that an index would be good here\n> etc\n\nAlso try:\n1) Various articles on Techdocs.postgresql.org\n2) Ewald G.'s PostgreSQL Book\n\nExplain output is not that easily converted into a plan of action ...\notherwise, Postgres would have automated it, neh? You have to get a\nfeel for what looks good and bad dynamically.\n\n-Josh Berkus\n", "msg_date": "Thu, 10 Oct 2002 09:59:18 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Info on explain output" } ]
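To make the rule-of-thumb question concrete: the pattern that most often calls for an index is a Seq Scan feeding a selective filter or the inner side of a nested loop. A small, entirely hypothetical example; the table, column and cost figures are made up:

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    --  Seq Scan on orders  (cost=0.00..2250.00 rows=5 width=48)
    --  few rows wanted but the whole table is read: an index candidate

    CREATE INDEX orders_customer_id_idx ON orders (customer_id);
    VACUUM ANALYZE orders;

    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    --  Index Scan using orders_customer_id_idx on orders  (cost=0.00..17.07 rows=5 width=48)

The other common red flag is an estimated row count that differs wildly from the actual count shown by EXPLAIN ANALYZE, which usually means the statistics are stale and the table needs an ANALYZE.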
[ { "msg_contents": "Is it possible to get rid of the \"t_natts\" fields in the tuple header? Is this field only for \"alter table add/drop\" support? Then it might\nbe possible to get rid of it and put the \"t_natts\" field in the page header, not the tuple header, if it can be assured that when updating/inserting\nrecords only a compatible page file (a page file with the same number of attributes) is used. Especially master-detail tables would \nbenefit from this, reducing the tuple overhead by another 9%.\n\nMight this be possible?\n\nRegards,\n\tMario Weilguni\n\n\n\n", "msg_date": "Fri, 11 Oct 2002 09:14:50 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "number of attributes in page files?" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> Is it possible to get rid of the \"t_natts\" fields in the tuple header?\n> Is this field only for \"alter table add/drop\" support?\n\n\"Only\"? A lot of people consider that pretty important ...\n\nBut removing 2 bytes isn't going to save anything, on most machines,\nbecause of alignment considerations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Oct 2002 08:12:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] number of attributes in page files? " }, { "msg_contents": "On Friday, 11 October 2002 at 14:12, Tom Lane wrote:\n> Mario Weilguni <[email protected]> writes:\n> > Is it possible to get rid of the \"t_natts\" fields in the tuple header?\n> > Is this field only for \"alter table add/drop\" support?\n>\n> \"Only\"? A lot of people consider that pretty important ...\n\nWith \"only\" I mean that it is an administrative task which requires operator intervention anyway, and it's a seldom-needed operation that can be allowed to take longer when queries become faster in return.\n\n>\n> But removing 2 bytes isn't going to save anything, on most machines,\n> because of alignment considerations.\n\nOK, I did not consider alignment, but the question remains: is this easily doable? Especially because only one more byte has to be saved for\na real saving on many architectures, and that is t_hoff. IMO t_hoff is not needed because it can be computed easily. This would give 20-byte headers instead of the 23 (24) bytes we have now. \nThat is 17% saved, and if it's not too complicated it might be worth considering.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Fri, 11 Oct 2002 16:00:13 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] number of attributes in page files?" } ]
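Tom's alignment point can be made concrete with the sizes implied in this thread (a back-of-the-envelope sketch; the actual padding depends on the platform's maximum alignment):

    current header:                23 bytes, padded to 24
    drop t_natts (2 bytes):        21 bytes, still padded to 24, nothing gained
    drop t_natts and t_hoff (3):   20 bytes, stays 20 on 4-byte-aligned platforms
                                   (4 of 24 bytes, roughly the 17% quoted),
                                   but is padded back to 24 where MAXALIGN is 8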
[ { "msg_contents": "If you want to get the max (CPU) performance and use gcc, you should give the -fprofile-arcs / -fbranch-probabilties options of gcc 3.2 a try. \nFor 50 pgbench read-only runs (1 mio tuples, 40000 txs, 10 clients) I get 14.4% speedup. \n\nThen I tried it with real data from our production system.\nThis is 2GB data, 120 tables, but most of the data is large object data (1.8GB), so most tables of the database are in-memory and the application is more cpu bound.\nWith this scenario, I still get 8% improvement. \n\nAll tests done on an Athlon XP/1500, 768MB RAM, Linux 2.4.19, gcc 3.2, 5400 RPM Maxtor.\n\nMight be worth a try. Probably the performance win will be smaller for larger databases.\n\nRegards,\n\tMario Weilguni\n\n\n", "msg_date": "Fri, 11 Oct 2002 10:44:03 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "Compile test with gcc 3.2" }, { "msg_contents": "Hi Mario:\n--- Mario Weilguni <[email protected]> wrote:\n> If you want to get the max (CPU) performance and use\n> gcc, you should give the -fprofile-arcs /\n> -fbranch-probabilties options of gcc 3.2 a try. \n> For 50 pgbench read-only runs (1 mio tuples, 40000\n> txs, 10 clients) I get 14.4% speedup. \n> \n> Then I tried it with real data from our production\n> system.\n> This is 2GB data, 120 tables, but most of the data\n> is large object data (1.8GB), so most tables of the\n> database are in-memory and the application is more\n> cpu bound.\n> With this scenario, I still get 8% improvement. \n> \n> All tests done on an Athlon XP/1500, 768MB RAM,\n> Linux 2.4.19, gcc 3.2, 5400 RPM Maxtor.\n> \n> Might be worth a try. Probably the performance win\n> will be smaller for larger databases.\n\n - Do you use the \"-fprofile-arcs \n-fbranch-probabilties\" options with other optimization\nflags? I've in a book (Optimimizing Red Hat Linux 6.2)\nthat one could also optimize speed by setting the\nCFLAGS to the following : \n \" -02 -fomit-frame-pointers -funroll-loops\" and\n running \"strip\" on the on binaries after they are\ncompiled.\n\nregards,\nludwig.\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Fri, 11 Oct 2002 07:34:58 -0700 (PDT)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile test with gcc 3.2" }, { "msg_contents": "except when you run the regression tests you get some very interesting results.\n\nOn Fri, 11 Oct 2002 07:34:58 -0700 (PDT)\nLudwig Lim <[email protected]> wrote:\n\n> \n> - Do you use the \"-fprofile-arcs \n> -fbranch-probabilties\" options with other optimization\n> flags? I've in a book (Optimimizing Red Hat Linux 6.2)\n> that one could also optimize speed by setting the\n> CFLAGS to the following : \n> \" -02 -fomit-frame-pointers -funroll-loops\" and\n> running \"strip\" on the on binaries after they are\n> compiled.\n> \n", "msg_date": "Fri, 11 Oct 2002 15:24:29 -0400", "msg_from": "Vincent Janelle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Compile test with gcc 3.2" } ]
[ { "msg_contents": "I need to syncronize two pgsql databases runing on RedHat linux7.2.\nOne is on our LAN and other one on the internet which we connect through a dialup connection.\n\nWaruna\n\n\n\n\n\n\n\nI need to syncronize two pgsql databases runing on \nRedHat linux7.2.\nOne is on our LAN and other one on the internet \nwhich we connect through a dialup connection.\n \nWaruna", "msg_date": "Fri, 11 Oct 2002 22:02:31 +0600", "msg_from": "\"Waruna Geekiyanage\" <[email protected]>", "msg_from_op": true, "msg_subject": "syncronize databases" } ]
[ { "msg_contents": "Folks, Justin,\n\nHey, I've been tinkering with PG_autotune in an effort to make it usable on my \ninstallation. \nhttp://gborg.postgresql.org/project/pgautotune/projdisplay.php\n\nFirst off, thank you Justin for getting inspired and writing the starter \nversion. This is something that would probably have remained *way* down the \nPostgres TODO list, were it not for you. \n\nSince it's such a great idea, I'd like to make it bulletproof so that it can \nbecome part of the standard Postgres distribution. I'm hoping that people \non this list can help.\n\nProblems, Bugs, & Suggestions:\n1) The program makes the assumption that the Postgres superuser is named \n\"pgsql\", forcing me to do a search-and-replace on the source to make it work \nat all on my system, where the superuser is named \"postgres\". This should \nbe a configuration option. Places I've identified where this is an issue: \na. the connection to the \"metrics\" database, b. the calls to Postgres \nexecutables (which are also sometimes made as the console user, causing them \nto fail if you run the program as \"root\").\n\n2) The program also assumes that all Postgres binaries are symlinked in \n/usr/local/bin. Since this symlinking isn't done by Postgres-make-install, \nwouldn't it be better to reference $PGHOME/bin? \n\n3) For that matter, it would be nice if the program would test $PGDATA and \n$PGHOME, and prompt the user if they are empty.\n\n4) The shell scripts need to have error-checking so that they exit if anything \nblows up. I can write this if Justin can explain what the shell scripts are \nsupposed to do, exactly, and where errors are acceptable.\n\n5) We need installation docs. I can write these. Sometime soon, really!\n\nQuestions & Suggestions for Enhancement:\n\n6) The shared_buffers param is capped at 500. Isn't this awfully low for a \nproduction server? What's the logic here?\n\n7) Any ideas on how to get around/adjust memory maximums for the host OS? \nThis is easy on Linux, but other *nixes are not so easy.\n\n8) What will be the difficulties in expanding the script to adjust more \nPostgresql.conf params, such as checkpoint_segments? Can we use feedback \nfrom the log to adjust these?\n\n9) I *love* the idea of letting the benchmarking script run custom queries. \nHowever, I would dearly like to expand it, letting it randomly grab from a \nlist of 10 custom queries entered by the user into a file or files. This \nwould allow the user to create a realistic mix of simple and complex queries, \nincluding some data manipulation and procedures.\n\n10) Can we eventually adjust the program to get feedback from system tools and \ngive the user hints on hardware limitations? For example, have the program \ntest if, at maximum settings, queries are slow but CPU and RAM are only 10% \nutilized and tell the user \"Your hard drives are probably too slow\"?\n\nI can help with: documentation, shell scripting, Linux system issues. Other \nvolunteers to help?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 15 Oct 2002 11:00:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "PG_Autotune 0.1" }, { "msg_contents": "Josh Berkus wrote:\n> \n> Folks, Justin,\n> \n> Hey, I've been tinkering with PG_autotune in an effort to make it usable on my\n> installation.\n> http://gborg.postgresql.org/project/pgautotune/projdisplay.php\n> \n> First off, thank you Justin for getting inspired and writing the starter\n> version. 
This is something that would probably have remained *way* down the\n> Postgres TODO list, were it not for you.\n\nThats cool. :)\n\n> Since it's such a great idea, I'd like to make it bulletproof so that it can\n> become part of the standard Postgres distribution. I'm hoping that people\n> on this list can help.\n\nHopefully. :)\n \n> Problems, Bugs, & Suggestions:\n> 1) The program makes the assumption that the Postgres superuser is named\n> \"pgsql\", forcing me to do a search-and-replace on the source to make it work\n> at all on my system, where the superuser is named \"postgres\". This should\n> be a configuration option. Places I've identified where this is an issue:\n> a. the connection to the \"metrics\" database, b. the calls to Postgres\n> executables (which are also sometimes made as the console user, causing them\n> to fail if you run the program as \"root\").\n\nGood point. It was developed on FreeBSD, and the PostgreSQL superuser\non FreeBSD (using the default installation method) is called \"pgsql\". \nOn at least Solaris and Linux the most common name for the superuser\nappears to be \"postgres\".\n\n> 2) The program also assumes that all Postgres binaries are symlinked in\n> /usr/local/bin. Since this symlinking isn't done by Postgres-make-install,\n> wouldn't it be better to reference $PGHOME/bin?\n\nYep.\n\n> 3) For that matter, it would be nice if the program would test $PGDATA and\n> $PGHOME, and prompt the user if they are empty.\n\nGood point.\n\n> 4) The shell scripts need to have error-checking so that they exit if anything\n> blows up. I can write this if Justin can explain what the shell scripts are\n> supposed to do, exactly, and where errors are acceptable.\n\nOk, no problem.\n\nThere are really only two shell scripts and a template. Forgot to\ninclude the template i n the downloadable version. :( Have to fix that\nsoon.\n\nThe shell scripts all reside in the $PGDATA directory and work like\nthis:\n\n1) The main shell script is called by the pg_autotune executable, and\nall it does is adjust the settings for a couple of variables in the\npostgresql.conf file. At present the variables it adjusts are\nmax_connections, sort_mem, vacuum_mem and shared_buffers. Others could\ndefinitely be added in, but this is a start. The method used for\nadjusting the variables is to have a template postgresql.conf file with\ntokens for the settings to be replaced, and then parsing them with sed\nor awk or something. Can't remember offhand how, but I do remember it\nwas a quick&ugly hack. :-/ Needs to be done properly down the track.\n\n2) Restarts the PostgreSQL database (pg_ctl stop; pg_ctl start). Sure,\nlong approach, but it works reliably. :)\n\n3) The second shell script exists only to catch the output from the\n\"pg_ctl start\" command and then exit, as if you don't pipe the output to\na valid process then the \"pg_ctl start\" doesn't appear to work\nproperly. This was the only way I could see that would consistently\nwork and not leave open filehandles around.\n\n> 5) We need installation docs. I can write these. Sometime soon, really!\n\nCool. Lets do it. :)\n\n\n> Questions & Suggestions for Enhancement:\n> \n> 6) The shared_buffers param is capped at 500. Isn't this awfully low for a\n> production server? 
What's the logic here?\n\nThey're just values that were ok to test with whilst making the program\nwork.\n\n> 7) Any ideas on how to get around/adjust memory maximums for the host OS?\n> This is easy on Linux, but other *nixes are not so easy.\n\nProbably the best approach initially is to detect memory failure related\nerrors where possible and then advise the user how to adjust them. \nPointing to the relevant section of the PostgreSQL manual in the\nPostgreSQL Interactive Docs might be the way to go here.\n\n\n> 8) What will be the difficulties in expanding the script to adjust more\n> Postgresql.conf params, such as checkpoint_segments? Can we use feedback\n> from the log to adjust these?\n\nGood idea. As this tool is a reasonably brute force tester, the more\nparameters added could increase the time needed for testing, unless\nsomeone comes up with some bright ideas. :)\n\n> 9) I *love* the idea of letting the benchmarking script run custom queries.\n> However, I would dearly like to expand it, letting it randomly grab from a\n> list of 10 custom queries entered by the user into a file or files. This\n> would allow the user to create a realistic mix of simple and complex queries,\n> including some data manipulation and procedures.\n\nHey good idea. The section of the code in place for letting the user\nrun custom queries isn't yet finished, but it wouldn't take a\nhalf-decent coder long to do.\n\n\n> 10) Can we eventually adjust the program to get feedback from system tools and\n> give the user hints on hardware limitations? For example, have the program\n> test if, at maximum settings, queries are slow but CPU and RAM are only 10%\n> utilized and tell the user \"Your hard drives are probably too slow\"?\n\nVery good thought, and very worthwhile. Any idea how to start with it?\n\n\n> I can help with: documentation, shell scripting, Linux system issues. Other\n> volunteers to help?\n\nHopefully.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 31 Oct 2002 12:38:45 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG_Autotune 0.1" } ]
[ { "msg_contents": "Heres an oddity. Why would it take more time to not find an answer than it would to find one? \nHere are my 2 queries.\nThe Cold Fusion output of the query is followed by an explain analyze.\n\nmaxTime (Records=0, Time=2223ms)\nSQL = \nselect cr.start_time as max\n\t\t\t\tfrom call_results cr, timezone tz, lists l\n\t\t\t\twhere (cr.start_time between '10/15/2002 08:00' and '10/15/2002 23:00')\n\t\t\t\tand l.full_phone = cr.phonenum\n\t\t\t\tand l.area_code = tz.area_code\n\t\t\t\tand tz.greenwich = '-7'\n\t\t\t\tand cr.project_id = 11\n\t\t\t\tand l.client_id = 8 \n\t\t\t\torder by cr.start_time desc\n\t\t\t\tlimit 1\n\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..1544.78 rows=1 width=49) (actual time=2299.11..2299.11 rows=0 loops=1)\n -> Nested Loop (cost=0.00..1266550.38 rows=820 width=49) (actual time=2299.10..2299.10 rows=0 loops=1)\n -> Nested Loop (cost=0.00..776978.04 rows=90825 width=42) (actual time=0.84..1849.97 rows=9939 loops=1)\n -> Index Scan Backward using start_time_idx on call_results cr (cost=0.00..6569.39 rows=6693 width=22) (actual time=0.38..303.58 rows=9043 loops=1)\n -> Index Scan using full_phone_idx on lists l (cost=0.00..114.94 rows=14 width=20) (actual time=0.15..0.16 rows=1 loops=9043)\n -> Index Scan using area_code_idx on timezone tz (cost=0.00..5.38 rows=1 width=7) (actual time=0.04..0.04 rows=0 loops=9939)\nTotal runtime: 2300.55 msec\n\n\nmaxTime (Records=1, Time=10ms)\nSQL = \nselect cr.start_time as max\n\t\t\t\tfrom call_results cr, timezone tz, lists l\n\t\t\t\twhere (cr.start_time between '10/15/2002 08:00' and '10/15/2002 23:00')\n\t\t\t\tand l.full_phone = cr.phonenum\n\t\t\t\tand l.area_code = tz.area_code\n\t\t\t\tand tz.greenwich = '-8'\n\t\t\t\tand cr.project_id = 11\n\t\t\t\tand l.client_id = 8 \n\t\t\t\torder by cr.start_time desc\n\t\t\t\tlimit 1\n NOTICE: QUERY PLAN:\n\nLimit (cost=0.00..331.03 rows=1 width=49) (actual time=1.19..1.53 rows=1 loops=1)\n -> Nested Loop (cost=0.00..1266550.38 rows=3826 width=49) (actual time=1.19..1.52 rows=2 loops=1)\n -> Nested Loop (cost=0.00..776978.04 rows=90825 width=42) (actual time=0.84..1.10 rows=2 loops=1)\n -> Index Scan Backward using start_time_idx on call_results cr (cost=0.00..6569.39 rows=6693 width=22) (actual time=0.39..0.48 rows=2 loops=1)\n -> Index Scan using full_phone_idx on lists l (cost=0.00..114.94 rows=14 width=20) (actual time=0.30..0.30 rows=1 loops=2)\n -> Index Scan using area_code_idx on timezone tz (cost=0.00..5.38 rows=1 width=7) (actual time=0.19..0.20 rows=1 loops=2)\nTotal runtime: 1.74 msec\n\n\n\n\n\n\n\n\nHeres an oddity.  Why would it take more time to not find an answer \nthan it would to find one?  
\nHere are my 2 queries.\nThe Cold Fusion output of the query is followed by \nan explain analyze.\n \nmaxTime (Records=0, Time=2223ms)SQL = select \ncr.start_time as max\t\t\t\tfrom call_results cr, timezone tz, lists \nl\t\t\t\twhere (cr.start_time between '10/15/2002 08:00' and '10/15/2002 \n23:00')\t\t\t\tand l.full_phone = cr.phonenum\t\t\t\tand l.area_code = \ntz.area_code\t\t\t\tand tz.greenwich = '-7'\t\t\t\tand cr.project_id = \n11\t\t\t\tand l.client_id = 8 \t\t\t\torder by cr.start_time desc\t\t\t\tlimit \n1\n \nNOTICE:  QUERY PLAN:\n \nLimit  (cost=0.00..1544.78 rows=1 width=49) \n(actual time=2299.11..2299.11 rows=0 loops=1)  ->  Nested \nLoop  (cost=0.00..1266550.38 rows=820 width=49) (actual \ntime=2299.10..2299.10 rows=0 \nloops=1)        ->  Nested \nLoop  (cost=0.00..776978.04 rows=90825 width=42) (actual time=0.84..1849.97 \nrows=9939 \nloops=1)              \n->  Index Scan Backward using start_time_idx on call_results cr  \n(cost=0.00..6569.39 rows=6693 width=22) (actual time=0.38..303.58 rows=9043 \nloops=1)              \n->  Index Scan using full_phone_idx on lists l  (cost=0.00..114.94 \nrows=14 width=20) (actual time=0.15..0.16 rows=1 \nloops=9043)        ->  Index Scan \nusing area_code_idx on timezone tz  (cost=0.00..5.38 rows=1 width=7) \n(actual time=0.04..0.04 rows=0 loops=9939)Total runtime: 2300.55 \nmsec\nmaxTime (Records=1, Time=10ms)SQL = select \ncr.start_time as max\t\t\t\tfrom call_results cr, timezone tz, lists \nl\t\t\t\twhere (cr.start_time between '10/15/2002 08:00' and '10/15/2002 \n23:00')\t\t\t\tand l.full_phone = cr.phonenum\t\t\t\tand l.area_code = \ntz.area_code\t\t\t\tand tz.greenwich = '-8'\t\t\t\tand cr.project_id = \n11\t\t\t\tand l.client_id = 8 \t\t\t\torder by cr.start_time desc\t\t\t\tlimit \n1 NOTICE:  QUERY PLAN:\n \nLimit  (cost=0.00..331.03 rows=1 width=49) (actual time=1.19..1.53 \nrows=1 loops=1)  ->  Nested Loop  (cost=0.00..1266550.38 \nrows=3826 width=49) (actual time=1.19..1.52 rows=2 \nloops=1)        ->  Nested \nLoop  (cost=0.00..776978.04 rows=90825 width=42) (actual time=0.84..1.10 \nrows=2 \nloops=1)              \n->  Index Scan Backward using start_time_idx on call_results cr  \n(cost=0.00..6569.39 rows=6693 width=22) (actual time=0.39..0.48 rows=2 \nloops=1)              \n->  Index Scan using full_phone_idx on lists l  (cost=0.00..114.94 \nrows=14 width=20) (actual time=0.30..0.30 rows=1 \nloops=2)        ->  Index Scan \nusing area_code_idx on timezone tz  (cost=0.00..5.38 rows=1 width=7) \n(actual time=0.19..0.20 rows=1 loops=2)Total runtime: 1.74 \nmsec", "msg_date": "Thu, 17 Oct 2002 08:45:07 -0600", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Max time queries" }, { "msg_contents": "\"Chad Thompson\" <[email protected]> writes:\n> Heres an oddity. Why would it take more time to not find an answer than it\n> would to find one?\n\nBecause the successful query stops as soon as it's exhausted the LIMIT\n(ie, after it's found the first matching combination of rows). The\nfailing query has to run through the whole tables looking in vain for\na match. Note the difference in number of rows scanned in the lower\nlevels of your query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Oct 2002 21:37:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Max time queries " } ]
[ { "msg_contents": "Something that might be worth considering:\n\nMany of the performance problems on pgsql-general seem to be related to the fact that no analyze is performed after the creation on the tables, so\nmaybe this might be an option to fix that (in future releases): when a table has no statistics at all, and the first seq-scan on the table is\nperformed, it might improve further performance if this seq-scan is used to get table statistics too. This should not be too expensive since reading the\ntable has to be done only once. Further queries will have at least preliminary statistics at hand.\n\nI'm not sure how (CPU) expensive statistic-gathering is, but if most of the work is reading the tuples, it might be a win to do this.\n\nRegards,\n\tMario Weilguni\n\n\n\n", "msg_date": "Mon, 21 Oct 2002 08:23:33 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": true, "msg_subject": "Self-generating statistics?" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> Many of the performance problems on pgsql-general seem to be related\n> to the fact that no analyze is performed after the creation on the\n> tables\n\nWell, there are lots of other ways an incompetent DBA can screw up a\ndatabase. The need to VACUUM and ANALYZE is stated clearly in the\ndocs. Providing workarounds for negligence isn't the right path to get\nstarted down, IMHO.\n\nThat said, the general idea of a self-tuning database system has\nmerit, IMHO. For example, this paper proposes a histogram data\nstructure that can be updated fairly cheaply based on data gathered\nfrom query execution:\n\n http://citeseer.nj.nec.com/255752.html\n\nA bunch of industry players (IBM, Microsoft, etc.) are putting some\nwork into this area (IBM calls it \"autonomic computing\", for\nexample). It might be an interesting area to look at in the future...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "21 Oct 2002 03:34:56 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Self-generating statistics?" }, { "msg_contents": "On Mon, Oct 21, 2002 at 08:23:33AM +0200, Mario Weilguni wrote:\n> Something that might be worth considering:\n> \n> Many of the performance problems on pgsql-general seem to be\n> related to the fact that no analyze is performed after the creation\n> on the tables, so maybe this might be an option to fix that (in\n> future releases): when a table has no statistics at all, and the\n> first seq-scan on the table is\n\nIt's never the case that a table has no statistics at all. It has\ndefault ones. Maybe they're right; it's hard to know.\n\nSomeone has posted on gborg an anto-vacuum daemon that might be of\nuse in this situation.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 21 Oct 2002 09:00:25 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Self-generating statistics?" } ]
[ { "msg_contents": "Hi:\n\n Are the \"cost\" variables (e.g.\nrandom_page_cost,cpu_tuple_cost,cpu_index_tuple_cost)\nin postgresql.conf optimal for a particular set of\nplatform / hardware requirements? (i.e. the configs\nworks best for let say if you have PIII computer w/\nIDE as storage).\n\n I'm asking this since a lot of softwares' configs\nare defaulted to a \"conservative\" settings where\nallowances are given for people who have older/slower\nCPUs(w/ not so large amount of memory).\n\n\n Thank you in advance.\n\nludwig.\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Mon, 21 Oct 2002 00:04:58 -0700 (PDT)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": true, "msg_subject": "Default cost variables in postgresql.conf" }, { "msg_contents": "On Mon, Oct 21, 2002 at 12:04:58AM -0700, Ludwig Lim wrote:\n> Hi:\n> \n> Are the \"cost\" variables (e.g.\n> random_page_cost,cpu_tuple_cost,cpu_index_tuple_cost)\n> in postgresql.conf optimal for a particular set of\n> platform / hardware requirements? (i.e. the configs\n\nNot exactly. They're best guesses. If you check the admin guide,\nyou'll see that there's a note about these which says that there is\nnot a well-defined method for calculating these things, so you are\nencouraged to experiment and share your findings. They _are_ known\nto be conservative defaults, like everything else in the system.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 21 Oct 2002 10:16:21 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default cost variables in postgresql.conf" } ]
[ { "msg_contents": "Hi:\n\n I was testing a database when notice that it does not\nused the new index I created. So after a couple of\nVACUUM ANALYZE it tried the following test queries.\n\n**** TEST CASE #1 ***********\nloyalty=# set enable_seqscan=off;\nSET VARIABLE\nloyalty=# explain analyze select count(*) from points\nwhere branch_cd=1 ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=119123.54..119123.54 rows=1 width=0)\n(actual time=811.08..811.0\n8 rows=1 loops=1)\n -> Index Scan using idx_monthly_branch on points \n(cost=0.00..1187\n65.86 rows=143073 width=0) (actual time=0.19..689.75\nrows=136790 loops=1)\nTotal runtime: 811.17 msec\n\n***** TEST CASE #2 *********\nloyalty=# set enable_seqscan=on;\nSET VARIABLE\nloyalty=# explain analyze select count(*) from points\nwhere branch_cd=1 ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=62752.34..62752.34 rows=1 width=0)\n(actual time=3593.93..3593.9\n3 rows=1 loops=1)\n -> Seq Scan on points (cost=0.00..62681.70\nrows=28254 width=0) (a\nctual time=0.33..3471.54 rows=136790 loops=1)\nTotal runtime: 3594.01 msec\n\n\n*** TEST CASE #3 (Sequential scan turned off) ******\nloyalty=# explain select * from points where\nbranch_cd=5;\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_monthly_branch on points \n(cost=0.00..49765.12 r\nows=16142 width=55)\n\n \n\n I am wondering why in test case #2 it did not use\nan index scan, where as in case #3 it did. The number\nof rows in test #2 and #3 are just a small subset of\ntable \"points\". \n\n The following are the number of elements in the\ntable:\n branch_cd = 1 ---> 136,970\n branch_cd = 5 ---> 39,385\n count(*) ---> 2,570,173\n\n Its rather strange why \"SELECT COUNT(*)...WHERE\nbranch_cd=1\" uses sequential scan even though it just\ncomprises 5.3% of whole table...\n \n I'ts also strange because of the ff: (Remember test\ncase 1 and 2 are the same query)\n\ntest 1 --> seq_scan=off --> 811.17 msec \ntest 2 --> seq_scan=on --> 3594.01 msec \n\n Test #1 have 400% improvement over Test #2, yet the\nquery plan for test #2 is the default.\n\n Are there way to let the planner improve the choice\nin using an index or not? BTW the \"cost\" variables\nare set to the default for the test.\n\n \n Thank you in advance.\n\nludwig.\n\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Tue, 22 Oct 2002 04:47:38 -0700 (PDT)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": true, "msg_subject": "Selective usage of index in planner/optimizer (Too conservative?)" }, { "msg_contents": "Ludwig Lim <[email protected]> writes:\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=119123.54..119123.54 rows=1 width=0)\n> (actual time=811.08..811.0\n> 8 rows=1 loops=1)\n> -> Index Scan using idx_monthly_branch on points \n> (cost=0.00..1187\n> 65.86 rows=143073 width=0) (actual time=0.19..689.75\n> rows=136790 loops=1)\n> Total runtime: 811.17 msec\n\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=62752.34..62752.34 rows=1 width=0)\n> (actual time=3593.93..3593.9\n> 3 rows=1 loops=1)\n> -> Seq Scan on points (cost=0.00..62681.70\n> rows=28254 width=0) (a\n> ctual time=0.33..3471.54 rows=136790 loops=1)\n> Total runtime: 3594.01 msec\n\nSomething fishy about this --- why is the estimated number of rows\ndifferent in the two cases (143073 vs 28254)? 
Did you redo VACUUM\nand/or ANALYZE in between?\n\n> I am wondering why in test case #2 it did not use\n> an index scan, where as in case #3 it did.\n\nProbably because it knows \"branch_cd=5\" is more selective than\n\"branch_cd=1\". It would be useful to see the pg_stats entry for\nbranch_cd.\n\n> Its rather strange why \"SELECT COUNT(*)...WHERE\n> branch_cd=1\" uses sequential scan even though it just\n> comprises 5.3% of whole table...\n\nNo, what's strange is that it's faster to use an indexscan for that.\nThe table must be very nearly in order by branch_cd; have you clustered\nit recently?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Oct 2002 10:24:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selective usage of index in planner/optimizer (Too conservative?)" }, { "msg_contents": "\n--- Tom Lane <[email protected]> wrote:\n> \n> Something fishy about this --- why is the estimated\n> number of rows\n> different in the two cases (143073 vs 28254)? Did\n> you redo VACUUM\n> and/or ANALYZE in between?\n\n I neither VACUUMed nor ANALYZEd between the 2\ncases.\n> \n> > I am wondering why in test case #2 it did not\n> use\n> > an index scan, where as in case #3 it did.\n> \n> Probably because it knows \"branch_cd=5\" is more\n> selective than\n> \"branch_cd=1\". It would be useful to see the\n> pg_stats entry for\n> branch_cd.\n\n Should I try altering the statistics? I tried\n ANALYZE points(branch_cd); \n but it still gave me the same results.\n\n> > Its rather strange why \"SELECT COUNT(*)...WHERE\n> > branch_cd=1\" uses sequential scan even though it\n> just\n> > comprises 5.3% of whole table...\n\n What I mean is the table is rather large. (2\nmillion rows) and I thought the planner would\nautomatically used an index to retrieve a small subset\n(based on the percentage) of the large table.\n\n> No, what's strange is that it's faster to use an\n> indexscan for that.\n> The table must be very nearly in order by branch_cd;\n> have you clustered\n> it recently?\n\n I never clustered the table. \n\n But prior to testing I dropped an index and create\na new one. Does dropping and creating index \"confuse\"\nthe planner even after a VACUUM ANALYZE? \n\n I seem to notice this trend everytime I add a new\nindex to the table. It would slow down and the\nperformance would gradually improve in a day or two.\n\n Should I try changing \"cost\" variables? I'm using\nPentium IV, with SCSI [RAID 5].\n\nregards,\n\nludwig.\n\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Tue, 22 Oct 2002 18:48:04 -0700 (PDT)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Selective usage of index in planner/optimizer (Too conservative?)" } ]
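A sketch of how to collect the information Tom asks for, plus one optional extra step; raising the per-column statistics target is not suggested in the thread, only a common follow-up when estimates for individual values are far off:

-- the planner's view of branch_cd, as Tom suggests
SELECT * FROM pg_stats WHERE tablename = 'points' AND attname = 'branch_cd';

-- optional, not from the thread: gather finer-grained statistics and re-test the plans
ALTER TABLE points ALTER COLUMN branch_cd SET STATISTICS 100;
ANALYZE points;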
[ { "msg_contents": "Hi\nI'd like to split queries into views, but I can't join them - planner \nsearch all of records instead of using index. It works very slow.\n\nHere is example:\n1) create table1(\n id1\tinteger primary key,\n ...fields...\n );\ntable1 has thousands rows >40000.\n\n2) create index ind_pkey on table1(id1);\n\n3) create view some_view as select\n id1,...fields...\n from table1\n join ...(10 joins);\n\n4) create view another_view as select\n id1,...fields...\n from table1\n join ... (5 joins)\n4) Now here is the problem:\n explain select * from some_view where id1=1234;\n result: 100\n\n explain select * from another_view where id1=1234;\n result: 80\n\n explain select * from some_view v1, another_view v2\n where v1.id1=1234 and v2.id1=1234\n result: 210\nExecution plan looks like planner finds 1 record from v1, so cost of \nsearching v1 is about 100. After this planner finds 1 record from v2 \n(cost 80) and it's like I want to have.\n\n explain select * from some_view v1 join another_view v2 using(id1)\n where v1.id1=1234;\n result: 10000 (!)\n\n explain select * from some_view v1 join some_view v2 using(id1)\n where v1.id1=1234;\n result: 10000 (!)\n Even joining the same view doesn't work well.\n\nExecution plan looks like planner finds 1 record from v1, so cost of \nsearching v1 is about 100. After this planner search all of records from \nv2 (40000 records, cost 9000) and then performs join with v1.\n\nI know that I can make only single view without joining views, but it \nmakes me a big mess.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Wed, 23 Oct 2002 09:53:10 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": true, "msg_subject": "joining views" }, { "msg_contents": "Tomasz Myrta <[email protected]> writes:\n> I'd like to split queries into views, but I can't join them - planner \n> search all of records instead of using index. It works very slow.\n\nI think this is the same issue that Stephan identified in his response\nto your other posting (\"sub-select with aggregate\"). When you write\n\tFROM x join y using (col) WHERE x.col = const\nthe WHERE-restriction is only applied to x. I'm afraid you'll need\nto write\n\tFROM x join y using (col) WHERE x.col = const AND y.col = const\nIdeally you should be able to write just\n\tFROM x join y using (col) WHERE col = const\nbut I think that will be taken the same as \"x.col = const\" :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Oct 2002 10:31:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: joining views " }, { "msg_contents": "Uďż˝ytkownik Tom Lane napisaďż˝:\n\n> I think this is the same issue that Stephan identified in his response\n> to your other posting (\"sub-select with aggregate\"). When you write\n> \tFROM x join y using (col) WHERE x.col = const\n> the WHERE-restriction is only applied to x. I'm afraid you'll need\n> to write\n> \tFROM x join y using (col) WHERE x.col = const AND y.col = const\n> Ideally you should be able to write just\n> \tFROM x join y using (col) WHERE col = const\n> but I think that will be taken the same as \"x.col = const\" :-(\n\nI am sad, but you are right. 
Using views this way will look strange:\n\ncreate view v3 as select\n v1.id as id1,\n v2.id as id2,\n ...\nfrom some_view v1, another_view v2;\n\nselect * from v3 where\nid1=1234 and id2=1234;\n\nIs it possible to make it look better?\n\nAnd how to pass param=const to subquery (\"sub-select with aggregate\") if \nI want to create view with this query?\nTomasz Myrta\n\n", "msg_date": "Wed, 23 Oct 2002 17:02:34 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: joining views" } ]
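Spelled out against the views from the start of the thread, the rewrite Tom describes looks like this; the duplicated restriction is the whole point, since it gives the planner an indexable condition on each side of the join:

select *
from some_view v1
join another_view v2 using (id1)
where v1.id1 = 1234
  and v2.id1 = 1234;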
[ { "msg_contents": "Hi all,\n\nI have a basic doubt about indexes... in the next example:\n\n-- ====================================================\nDROP TABLE ctest;\nCREATE TABLE ctest\n ( cusid numeric(5) PRIMARY KEY NOT NULL, -- Customer ID.\n namec varchar(10) NOT NULL, -- Customer Name.\n surnc varchar(20), -- Customer Surname.\n cashc numeric(10,4) -- Customer Cash.\n );\nCREATE INDEX ctest_cashc ON ctest (cashc);\n\nINSERT INTO ctest VALUES (10,'Ten Custom','S.Ten Customer',1000);\nINSERT INTO ctest VALUES (5 ,'Five Custo','S.Five Customer',500);\nINSERT INTO ctest VALUES (8, 'Eigth Cust','S.Eigth Customer',800);\nINSERT INTO ctest VALUES (90,'Nine Custo','S.Nine Customer',9000);\nINSERT INTO ctest VALUES (70,'Seven Cust','S.Seven Customer',7000);\n\n-- Next two SELECT will execute using index Scan on ctest_pkey\nexplain SELECT * from ctest WHERE cusid between 5 AND 10 AND cashc < 1000;\nexplain SELECT * from ctest WHERE cusid =5 AND cashc = 1000;\n\nCREATE INDEX ctest_othec ON ctest (cusid, cashc);\n\n-- Next two SELECT will execute using Seq Scan.\nexplain SELECT * from ctest WHERE cusid between 5 AND 10 AND cashc < 1000;\nexplain SELECT * from ctest WHERE cusid =5 AND cashc = 1000;\n\n-- ====================================================\n\nSELECTs executed before CREATE INDEX ctest_othec... are using index scan on PRIMARY KEY, but after the CREATE INDEX all SELECTs are using seq scan.\n\nSeq Scan has lower cost than index scan (I think because there are few rows in table).\n\nBut if we have an index with the two colums I am using in the WHERE clause, why is the planner using seq scan ? (Or perhaps it is because too few rows in the table ?)....\n\nThanks..\n\n", "msg_date": "Sat, 26 Oct 2002 12:27:28 +0200", "msg_from": "Terry Yapt <[email protected]>", "msg_from_op": true, "msg_subject": "Basic question about indexes/explain" }, { "msg_contents": "Am Samstag, 26. Oktober 2002 12:27 schrieb Terry Yapt:\n> Hi all,\n>\nsnip\n> I have a basic doubt about indexes... in the next example:\n> But if we have an index with the two colums I am using in the WHERE clause,\n> why is the planner using seq scan ? (Or perhaps it is because too few rows\n> in the table ?)....\n\nFirst of all, you did not analyze your table (at least you did not mention you did). And an index is never a win for such a small table. I think the planner is fine here to select a seq scan, because your whole table is only 1 database page, so it would be no win to check the index here.\n\nEverything is explained in the manual, check http://developer.postgresql.org/docs/postgres/indexes.html\n\nregards,\n\tmario weilguni\n", "msg_date": "Sat, 26 Oct 2002 13:40:06 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic question about indexes/explain" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> Everything is explained in the manual, check\n> http://developer.postgresql.org/docs/postgres/indexes.html\n\nIn particular note the comments at the bottom of\nhttp://developer.postgresql.org/docs/postgres/performance-tips.html:\n\n\"It is worth noting that EXPLAIN results should not be extrapolated to\nsituations other than the one you are actually testing; for example,\nresults on a toy-sized table can't be assumed to apply to large\ntables. The planner's cost estimates are not linear and so it may well\nchoose a different plan for a larger or smaller table. 
An extreme\nexample is that on a table that only occupies one disk page, you'll\nnearly always get a sequential scan plan whether indexes are available\nor not. The planner realizes that it's going to take one disk page read\nto process the table in any case, so there's no value in expending\nadditional page reads to look at an index.\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Oct 2002 10:24:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Basic question about indexes/explain " } ]
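For anyone repeating the experiment on a table large enough to matter, the usual sequence is to load a realistic number of rows, analyze, and only then look at the plan (a sketch reusing the table from the example above):

-- after loading a realistic amount of data into ctest
ANALYZE ctest;
EXPLAIN SELECT * FROM ctest WHERE cusid BETWEEN 5 AND 10 AND cashc < 1000;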
[ { "msg_contents": "\nAfter checking some docs on performance tuning, I'm trying to\nfollow Bruce Momjian (sp??) advice to set the shared_buffers\nat 25% of the amount of physical memory (1GB in our server)\nand 4% for the sort_mem.\n\nWhen I try that, I get an error message when starting postgres,\ncomplaining that the amount of shared memory requested exceeds\nthe maximum allowed by the kernel (they talk about increasing\nthe kernel parameter SHMMAX -- does this mean that I have to\nrecompile the kernel? Or is it just a \"runtime\" configuration\nparameter that I set and on the next reboot will be taken?)\n\nTo double check if I understood correctly:\n\nI have 1GB, so I want 256MB as shared buffers memory; each\nshared buffer is 8kbytes, so I take 256M / 8k, which is 32k --\nso, I uncomment the line shared_buffers in the configuration\nfile, and put:\n\nshared_buffers = 32000\n\nI don't touch anything else (max_connections keeps its default\nvalue, but as I understand, that has nothing to do anyway...\nright?)\n\nSo, what should I do?\n\nApologies if this is an FAQ -- I tried searching the archives,\nbut I get a 404 - Not Found error when following the link to\nthe archives for this list :-(\n\nThanks in advance for any comments / advice !\n\nCarlos\n--\n\n\n", "msg_date": "Sat, 26 Oct 2002 18:36:05 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Setting shared buffers" }, { "msg_contents": "Carlos,\n\n> After checking some docs on performance tuning, I'm trying to\n> follow Bruce Momjian (sp??) advice to set the shared_buffers\n> at 25% of the amount of physical memory (1GB in our server)\n> and 4% for the sort_mem.\n\nI tend to set my shared_buffers somewhat higher, but that's a good\nplace to start. Be cautious about sort_mem on a server with a lot of\nusers; sort_mem is not shared, so make sure that you have enough that\nyour server could handle 1-2 sorts per concurrent user without running\nout of RAM.\n\n> When I try that, I get an error message when starting postgres,\n> complaining that the amount of shared memory requested exceeds\n> the maximum allowed by the kernel (they talk about increasing\n> the kernel parameter SHMMAX -- does this mean that I have to\n> recompile the kernel? Or is it just a \"runtime\" configuration\n> parameter that I set and on the next reboot will be taken?)\n\nIt's easy, on Linux don't even have to reboot. Other OS's are harder.\n See this very helpful page:\nhttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/kernel-resources.html#SYSVIPC\n\nIn fact, I tend to up my SHMMAX and SHMMALL and shared_buffers at night\non some databases, when they are doing automatic updates, and adjust\nthem back down during the day, when I want to prevent heavy user loads\nfrom using up all system RAM.\n\n> I have 1GB, so I want 256MB as shared buffers memory; each\n> shared buffer is 8kbytes, so I take 256M / 8k, which is 32k --\n> so, I uncomment the line shared_buffers in the configuration\n> file, and put:\n\nSee the calculations on the page link above. They are more specific\nthan that, and I have found the numbers there to be good estimates,\nmaybe only 10-20% high.\n\n-Josh Berkus\n", "msg_date": "Sat, 26 Oct 2002 17:14:35 -0700", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Setting shared buffers" }, { "msg_contents": "\nThanks John! This is very helpful...\n\nJust one detail I'd like to double check:\n \n\n>It's easy, on Linux don't even have to reboot. 
Other OS's are harder.\n> See this very helpful page:\n>http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/kernel-resources.html#SYSVIPC\n>\n>In fact, I tend to up my SHMMAX and SHMMALL and shared_buffers [...]\n>\n\nAccording to that document, I should put the same value for the\nSHMMAX and SHMMALL -- however, when I do:\n\n cat /proc/sys/kernel/shmmax\n cat /proc/sys/kernel/shmmall\n\non my Linux system (RedHat 7.3, soon to upgrade to 8.0), I\nget different values, shmmall being shmmax divided by 16\n\nIs that normal? What should I do? Should I follow the exact\nsame instructions from that document and set both to the\nexact same value?\n\nAre the default values set that way (i.e., different values)\nfor some strange reason, or is it that on the 2.4 kernel\nthe shmmall is indicated in blocks of 16 bytes or something\nlike that?\n\nThanks!\n\nCarlos\n--\n\n\n", "msg_date": "Mon, 28 Oct 2002 17:07:42 -0500", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Setting shared buffers" } ]
[ { "msg_contents": "Hello Newsgroup\n\nI have trouble with postgres 7.2.3. This system works fine, but last week\nevery postmaster process exhausted my system to 100%. What can i do ? When I\ndump all databases it has 16MB. I start my system with the following\ncommand:\n\npostmaster -i -c shared_buffers=1024 -c sort_mem=16384 -c\neffective_cache_size=2048 -c max_connections=128 -c fsync=false -c\nenable-seqscan=false -c enable_indexscan=false -c enable_tidscan=false -c\nenable_sort=false -c enable_nestloop=false -c enable_hashjoin=false -c\nenable-mergejoin=false -c show_parser_stats=false -c\nshow_planner_stats=false -c show_executor_stats=false -c\nshow_query_stats=false -c random_page_cost=0.99 -o -F\n\nCan someone help me ?\n\n--\nLars Maschke\n---\nEs gibt Tage, da verliert man und es gibt Tage, da gewinnen die anderen.\n\n\n", "msg_date": "Mon, 28 Oct 2002 12:00:52 +0100", "msg_from": "\"Lars Maschke\" <[email protected]>", "msg_from_op": true, "msg_subject": "[pgsql-performance] Performance Problems" }, { "msg_contents": "On Mon, Oct 28, 2002 at 12:00:52PM +0100, Lars Maschke wrote:\n> Can someone help me ?\n\nMaybe. You need to tell us what \"exhausted your system\" means. Did\nit crash? Were you swapping? Were your CPUs pegged?\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 29 Oct 2002 08:38:24 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Performance Problems" } ]
[ { "msg_contents": "Hello,\n\nI have a question regarding clusters. What with all\nthe hoop-la about Oracle RAC, the question of\nclustering PostgreSQL has come up at work.\n\nNow, I know that only one system can actually update\nthe database, and in a active/passive failover\nsituation that is ok. But! If I have lots and lots of\nREADS from the DB, can I cluster many low end systems\ntogether behind an SLB? Assume that all systems have a\nqlogic card, and are attached to a SAN, and that the\nSAN holds the data. Can PostgreSQL be configured to\nread from the SAN? Does each system have to initialize\nthe DB?\n\nI think this could greatly improve the perfomance from\na application appearance, but, so far I have only seen\ndocumentation about Oracle RAC, DB2, and MySQL using\nsome sort of cluster software, be it, kimberlite, Red\nHat Cluster Manager, or Vertias Cluster Server.\n\nPostgreSQL seems to be our DB of choice, and I just\nwant to have a scalable solution via clustering for\nit. No replication. Thanks for any thoughts!\n\n-James\n\n\n=====\nJames Kelty\n11742 NW Valley Vista Rd.\nHillsboro, OR 97124\nCell: 541.621.5832\[email protected]\n\n__________________________________________________\nDo you Yahoo!?\nY! Web Hosting - Let the expert host your web site\nhttp://webhosting.yahoo.com/\n", "msg_date": "Mon, 28 Oct 2002 12:47:52 -0800 (PST)", "msg_from": "James Kelty <[email protected]>", "msg_from_op": true, "msg_subject": "Clusters" }, { "msg_contents": "On Mon, Oct 28, 2002 at 12:47:52PM -0800, James Kelty wrote:\n> together behind an SLB? Assume that all systems have a\n> qlogic card, and are attached to a SAN, and that the\n> SAN holds the data. Can PostgreSQL be configured to\n> read from the SAN? Does each system have to initialize\n> the DB?\n\nYou can't do this safely. PostgreSQL wants to control its disk. \nSomeone has said on the (?) -general list that he has modified the\nPostgreSQL code to do this, but it makes me nervous.\n\nThe Postgres-R project is trying to do something similar, but it's\nsome way from production quality.\n\n> it. No replication. Thanks for any thoughts!\n\nI sort of wonder why \"no replication\" is a requirement. If you want\nlots of cheap, read-only machines, why not do it with replication? \nYou can buy a _lot_ of x86 boxes with Promise IDE RAID and big, fast\nIDE drives for the price of ORAC.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 28 Oct 2002 15:59:27 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clusters" }, { "msg_contents": "> > together behind an SLB? Assume that all systems have a qlogic\n> > card, and are attached to a SAN, and that the SAN holds the\n> > data. Can PostgreSQL be configured to read from the SAN? Does each\n> > system have to initialize the DB?\n> \n> You can't do this safely. PostgreSQL wants to control its disk.\n> Someone has said on the (?) -general list that he has modified the\n> PostgreSQL code to do this, but it makes me nervous.\n\nDidn't Tom say that it was possible if you had different WAL logs for\neach instance? ie, just share the data directory, but everything else\nhas to be on a per instance basis. Check the archives, someone just\nasked about this a week ago or so. 
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 28 Oct 2002 13:18:46 -0800", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clusters" }, { "msg_contents": "On Mon, Oct 28, 2002 at 01:18:46PM -0800, Sean Chittenden wrote:\n> \n> Didn't Tom say that it was possible if you had different WAL logs for\n> each instance? ie, just share the data directory, but everything else\n> has to be on a per instance basis. Check the archives, someone just\n> asked about this a week ago or so. -sc\n\nAs Sean says, check the archives. But I think the problem is bigger\nthan just the WAL. For instance, the pidfile is in the data\ndirectory, so each system is going to try to overwrite that. Plus,\nthere's no read-only mode for Postgres, so if one of the systems\nwrites where it shouldn't, you'll blow everything away. I can\nappreciate that some people like to play at the bare metal this way,\nbut it gives me ulcers ;-)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 28 Oct 2002 16:32:16 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clusters" }, { "msg_contents": "I've just been very unhappy with the ease of use for\nPostgres replication. Pgreplicator was a huge pain\nthat worked less often than not, rserv was really\nunmanageable. I haven't had a chance to look at\ndbmirror yet, though. Plus the replication would start\nto chew up network bandwidth at some point. Of course,\nso would reading from the SAN. But, then there is the\nissue of system failure. If the system has to be\nre-imaged, then I'd have to take a snap shot of the\nmaster, and re-apply it to the new system. It just\nseems more manageable if I can plug in a new Postgres\n'instance' to the SAN, and have it up to date the\nminute it starts. I know that postgres doesn't have a\n'read-only' mode, but it does have the GRANT option.\nSo, access to the DB _can_ be controlled that way at\nleast. \n\nAnyway, thanks for all the thoughts and info. If\nanyone knows of some other replication service besides\nthe two listed above, great! Let me know!\n\nLemme just say, that the feature set of Postgres, when\ntalking strictly database, is AWESOME. Really easy to\nwork with, and around, but, in the HA world, it seems\na little difficult to work with.\n\nThanks again, Guys!\n\n-James\n\n--- Andrew Sullivan <[email protected]> wrote:\n> On Mon, Oct 28, 2002 at 12:47:52PM -0800, James\n> Kelty wrote:\n> > together behind an SLB? Assume that all systems\n> have a\n> > qlogic card, and are attached to a SAN, and that\n> the\n> > SAN holds the data. Can PostgreSQL be configured\n> to\n> > read from the SAN? Does each system have to\n> initialize\n> > the DB?\n> \n> You can't do this safely. PostgreSQL wants to\n> control its disk. \n> Someone has said on the (?) -general list that he\n> has modified the\n> PostgreSQL code to do this, but it makes me nervous.\n> \n> The Postgres-R project is trying to do something\n> similar, but it's\n> some way from production quality.\n> \n> > it. No replication. Thanks for any thoughts!\n> \n> I sort of wonder why \"no replication\" is a\n> requirement. If you want\n> lots of cheap, read-only machines, why not do it\n> with replication? 
\n> You can buy a _lot_ of x86 boxes with Promise IDE\n> RAID and big, fast\n> IDE drives for the price of ORAC.\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141\n> Yonge Street\n> Liberty RMS Toronto,\n> Ontario Canada\n> <[email protected]> \n> M2P 2A8\n> +1 416 646\n> 3304 x110\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n=====\nJames Kelty\n11742 NW Valley Vista Rd.\nHillsboro, OR 97124\nCell: 541.621.5832\[email protected]\n\n__________________________________________________\nDo you Yahoo!?\nY! Web Hosting - Let the expert host your web site\nhttp://webhosting.yahoo.com/\n", "msg_date": "Mon, 28 Oct 2002 16:00:38 -0800 (PST)", "msg_from": "James Kelty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Clusters" }, { "msg_contents": "On Mon, Oct 28, 2002 at 04:00:38PM -0800, James Kelty wrote:\n\n> that worked less often than not, rserv was really\n> unmanageable. \n\nWe use PostgreSQL, Inc's eRserver, which is a commercial version of\nthe code in contrib/, and I can say that it is not unmanageable, but\nit is some work at first. The commercial version is an improvement\non the contrib/ code, though. For us, it's worth it; but you have to\ndecide that for yourself.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 28 Oct 2002 22:27:36 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clusters" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n> together behind an SLB? Assume that all systems have a qlogic\n> card, and are attached to a SAN, and that the SAN holds the\n> data. Can PostgreSQL be configured to read from the SAN? Does each\n> system have to initialize the DB?\n>> \n>> You can't do this safely. PostgreSQL wants to control its disk.\n>> Someone has said on the (?) -general list that he has modified the\n>> PostgreSQL code to do this, but it makes me nervous.\n\n> Didn't Tom say that it was possible if you had different WAL logs for\n> each instance?\n\nI said no such thing. I said it will not work, period.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 29 Oct 2002 08:43:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clusters " } ]
[ { "msg_contents": "I'm looking for some advice or benchmarks comparing low end systems for a postgres installation. \n\nCurrently, I've got postgres running on the same system as the app server accessing a single fast IDE drive. The database is on the order of 1 gig, with two main tables accounting for 98% of the data. Between the app servers and the database, I'm pretty sure that neither of the main tables are cached in memory for any significant time. I'm guessing that this is sub optimal. (The data size is 1 gig now, but I will be adding more 1 gig databases to this system in the near future) I'm planning to split this into an app server and database server.\n\nIn an ideal world, I'd throw a lot of 15k scsi/raid0+1 at this. But I don't have an ideal world budget. I've got more of an ide world budget, if that. (~1k)\n\nI know the first order of business is to ignore the hardware and make sure that I've got all of the table scans found and turned into indexes. I'm still working on that. Are there any tools that save queries and the plans, then report on the ones that are the biggest performance drags?\n\nBut since I do software, it's obviously a hardware problem. ;>\n\nMy hardware options:\n\nProcessor:\n\n* My low cost option is to repurpose an under used p3 with onboard IDE raid and pc133 memory. The high cost option is to get a new mid range Athalon with 266/ddr memory. Will maxing out the memory mean that whatever I don't use for client connections will be used for caching the drive system? (most likely, I will be running debian woody with a 2.4 series kernel)\n\nDrives:\n\n* The cost difference between IDE and SCSI is roughly a factor of 2-4x. (100 gig 7200 rpm IDE can be had for a little over $100, 10k 36 gig SCSI is about $200. Am I better off throwing twice as many (4) IDE disks at the system? Does it change if I can put each IDE drive on its own channel?\n\nDrive Layout:\n\n* What Drive layout?\n\nRaid?\n0 gives better latency if the controller reads from whichever gets the data first. It's unclear if IDE or software raid actually does this though. \n1 Gives better throughput, at a cost to latency. \n5 Like 1 but with redundancy. It's unclear if I'll be able to do this without hardware SCSI raid. \n\nNon Raid?\nI've read about seperating table spaces on different drives, so that indexes and data can be written at the same time. This advice appears to be tailored to the complexity of oracle. The ideal configuration according to this info appears to be multiple drives, all mirrored individually. \n\nDoes the write ahead logging of PG mean that no matter what indexes and data are changed, that there will be one sync to disk? Does this reduce the penalty of indexes? WAL seems to mean that to get performance out of a drive array, I'd want to use the fastest (latency/throughput) logical single image I could get, not a collection of mirrored drives.\n\nI'd appreciate any insight.\n\neric\n\n\n", "msg_date": "Mon, 28 Oct 2002 13:56:14 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Low Budget Performance" }, { "msg_contents": "On 28 Oct 2002 at 13:56, eric soroos wrote:\n\n> Currently, I've got postgres running on the same system as the app server accessing a single fast IDE drive. The database is on the order of 1 gig, with two main tables accounting for 98% of the data. Between the app servers and the database, I'm pretty sure that neither of the main tables are \ncached in memory for any significant time. 
I'm guessing that this is sub optimal. (The data size is 1 gig now, but I will be adding more 1 gig databases to this system in the near future) I'm planning to split this into an app server and database server.\n> \n> In an ideal world, I'd throw a lot of 15k scsi/raid0+1 at this. But I don't have an ideal world budget. I've got more of an ide world budget, if that. (~1k)\n> \n> I know the first order of business is to ignore the hardware and make sure that I've got all of the table scans found and turned into indexes. I'm still working on that. Are there any tools that save queries and the plans, then report on the ones that are the biggest performance drags?\n> \n> But since I do software, it's obviously a hardware problem. ;>\n> \n> My hardware options:\n\nI would say throw a lot of RAM no matter what type. Even PC133 is going to be \nfaster than any disk you can buy anytimes. I would say 2Gig is a nice place to \nstart.\n\nA gig is not much of a database but a lot depends upon what do you do with the \ndata. Obviously 50 clients doing sequential scan with rows ordered in random \nfashion would chew any box,..;-)\n\nProcessor does not matter much. But I would advice to split app server and \ndatabase server ASAP.\n\nWell, IDE RAID looks like nice optio to me, but before finalising RAID config., \nI would advice to test performance and scalability with separate database \nserver and couple of Gigs of RAM. Because if this configuration is sufficient \nfor your need, probably you can choose a conservatice RAID config that would \nenhance availability rather than getting every ounce of performance out of it. \nAs far as possible, don't compramise with storage availability.\n\n> Does the write ahead logging of PG mean that no matter what indexes and data are changed, that there will be one sync to disk? Does this reduce the penalty of indexes? WAL seems to mean that to get performance out of a drive array, I'd want to use the fastest (latency/throughput) logical \nsingle image I could get, not a collection of mirrored drives.\n\nI guess RAID will take care of lot of these issues. Besides if you use volume \nmanager you can add partitions from different disks, effectively splitting the \nIO. Of course, you can shutdown the database and symlink things to another \ndrive, but that's hack and nothing else. Don't do it as far as possible..\n\nHTH\n\nBye\n Shridhar\n\n--\nYou're dead, Jim.\t\t-- McCoy, \"Amok Time\", stardate 3372.7\n\n", "msg_date": "Tue, 29 Oct 2002 12:00:45 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "Am Montag, 28. Oktober 2002 22:56 schrieb eric soroos:\n> Raid?\n> 0 gives better latency if the controller reads from whichever gets the data\n> first. It's unclear if IDE or software raid actually does this though. 1\n> Gives better throughput, at a cost to latency.\n> 5 Like 1 but with redundancy. It's unclear if I'll be able to do this\n> without hardware SCSI raid.\n\nJust for the raid part, we've very good expiriences with Raid 10. Performs well and has mirroring. Avoid Raid 5 if possible, write performance will suffer greatly.\n\nRegards,\n\tMario Weilguni\n", "msg_date": "Tue, 29 Oct 2002 08:07:50 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "Eric,\n\n> > Currently, I've got postgres running on the same system as the app\n> server accessing a single fast IDE drive. 
The database is on the\n> order of 1 gig, with two main tables accounting for 98% of the data.\n> Between the app servers and the database, I'm pretty sure that\n> neither of the main tables are \n> cached in memory for any significant time. I'm guessing that this is\n> sub optimal. (The data size is 1 gig now, but I will be adding more 1\n> gig databases to this system in the near future) I'm planning to\n> split this into an app server and database server.\n\nOne gig is a large database for a single IDE drive -- especially with\nmultiple client connections.\n\n> > In an ideal world, I'd throw a lot of 15k scsi/raid0+1 at this.\n> But I don't have an ideal world budget. I've got more of an ide\n> world budget, if that. (~1k)\n\nWell, no matter how many performance tricks you add in, the speed will\nbe limited by the hardware. Make sure that your client/employer knows\nthat *before* they complain about the speed.\n\n> > I know the first order of business is to ignore the hardware and\n> make sure that I've got all of the table scans found and turned into\n> indexes. I'm still working on that. Are there any tools that save\n> queries and the plans, then report on the ones that are the biggest\n> performance drags?\n\nNot exactly. If you enable Postgres 7.2 STATISTICS, you can get a lot\nof information about which indexes are being used, which are not, and\nwhich tables are having a lot of table scans. A tool like you\ndescribe would be really, really useful -- in fact, if anyone wrote\none, I'm sure you could sell it for $$$$.\n\n> > But since I do software, it's obviously a hardware problem. ;>\n\n<grin>\n\nActually, your best option for the hardware is to test what portion of\nthe hardware is bottlenecking your performance, and address that. But\nfirst:\n\n> Well, IDE RAID looks like nice optio to me, but before finalising\n> RAID config., \n> I would advice to test performance and scalability with separate\n> database \n> server and couple of Gigs of RAM. \n\nI'm not convinced that current IDE RAID actually improves database disk\nthroughput -- there's a lot of overhead in the one controller I tried\n(Promise). Does anyone have some statistics they can throw at me? \n\nA cheaper and easier method, involving 3-4 disks:\n\nChannel 1, Disk 1: Operating System, Swap, and PostgreSQL log\nChannel 1, Disk 2: WAL Files\nChannel 2, Disk 1: Database\nChannel 2, Disk 2 (optional): 2nd database data\n\n*however*, if you have multiple databases being simulteaneously\naccessesed, you will want to experiment with shuffling around the\ndatabases and WAL files to put them on different disks. The principle\nis to divide the disk tasks that are simultaenous ammonng as many disks\nas possible; thus the WAL files always do better on a different disk\nand channel than the database.\n\n> > Does the write ahead logging of PG mean that no matter what indexes\n> and data are changed, that there will be one sync to disk? Does this\n> reduce the penalty of indexes? \n\nIn a word: No. Depending on the size of the update, there may be\nmultiple synchs. And indexes do carry a significant penalty on large\nupdates; just try runninng 10,000 updates to an indexed column as one\ntransaction, and the penalty will be obvious. 
In fact, for my data\nload procedures, I tend to drop and re-create indexes.\n\n> WAL seems to mean that to get\n> performance out of a drive array, I'd want to use the fastest\n> (latency/throughput) logical \n> single image I could get, not a collection of mirrored drives.\n\nMirrored drives are different than RAID. However, you are correct\nthat the redundancy/fail-over factor in some RAID and Mirroring comes\nat a performance penalty.\n\nBut you need to determine where you are actually losing time.\n Assuming that your tables are correctly indexed, your files are\ndistributed, and your database is VACUUM FULL ANALYZed, and your\npostgresql.conf configured for optimum use of your exisiting memory,\nthen here's what you do (assuming that you use Linux)\n\n1. From a workstation, open 2 terminal windows on the server. In #1,\nrun \"vmstat 3\", in the other \"top\"\n\n2. Have your users pound on the application, trying all of the most\ncomplicated (and slow) operations in the app. More users is better,\nfor this test.\n\n3. Watch Vmstat and Top. What you're looking for is:\n\na) Is the processor at 75% or above? If so, you either need a faster\nprocessor or more efficient queries\n\nb) Is the system using 80-100% of the RAM which you allocated it? If\nso, add more RAM and increase the Postgresql.conf memory variables.\n\nc) Is the system using Swap memory? if so, either add more RAM, or\n*decrease* the postgresql.conf memory variables.\n\nd) Are RAM and Processor at less than 50%, but the Disk I/O reaches a\nmaximum number and stays there for minutes? Then your disk channel is\nflooded, and you cannot improve performance except by either improving\nyour queries so the pull less rows, or adding more/faster disk\ncapacity.\n\nThe above process, while drawn-out, will help you avoid spending a lot\nof money on, for example, RAM that won't make a difference.\n\n-Josh Berkus\n\n\n\n\n", "msg_date": "Tue, 29 Oct 2002 09:00:27 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "Mario,\n \n> Just for the raid part, we've very good expiriences with Raid 10.\n> Performs well and has mirroring. Avoid Raid 5 if possible, write\n> performance will suffer greatly.\n\nOut of curiousity, what is it with RAID 5? I've encountered the poor\nwrite performance too ... any idea why?\n\n-Josh\n", "msg_date": "Tue, 29 Oct 2002 09:10:31 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "On Tue, Oct 29, 2002 at 09:10:31AM -0800, Josh Berkus wrote:\n> \n> Out of curiousity, what is it with RAID 5? I've encountered the poor\n> write performance too ... any idea why?\n\nIt largely depends on the controller and te implementation. It has\nto do with the cost of calculating the checksum. If the\nimplementation of that is inefficient, the writes become inefficient.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 29 Oct 2002 12:31:10 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "Josh,\n\nThanks for the reply. \n\n> One gig is a large database for a single IDE drive -- especially with\n> multiple client connections.\n\nThat's good to know. \n\nIs a scsi system that much better? 
Looking at prices, scsi is 1/2 the capacity and double the price for the 80 gig 7200rpm ide vs 36 gig 10k rpm scsi. Assuming that I'll never run out of space before running out of performance, I can dedicate 2x the number of ide drives to the problem. \n\n> > Well, IDE RAID looks like nice optio to me, but before finalising\n> > RAID config., \n> > I would advice to test performance and scalability with separate\n> > database \n> > server and couple of Gigs of RAM. \n> \n> I'm not convinced that current IDE RAID actually improves database disk\n> throughput -- there's a lot of overhead in the one controller I tried\n> (Promise). Does anyone have some statistics they can throw at me? \n\nAll of the benchmarks that I've seen show that IDE raid is good for large operations, but for random seek and small data transfers, you don't get anywhere near the expected scaling. \n\n> A cheaper and easier method, involving 3-4 disks:\n> \n> Channel 1, Disk 1: Operating System, Swap, and PostgreSQL log\n> Channel 1, Disk 2: WAL Files\n> Channel 2, Disk 1: Database\n> Channel 2, Disk 2 (optional): 2nd database data\n\nWith IDE, I think I can manage to put each drive on a seperate channel. I've either got one extra controller onboard, or I can add a 4 channel pci card. From what I've read, this is one of the more important factors in IDE performance. \n\n> *however*, if you have multiple databases being simulteaneously\n> accessesed, you will want to experiment with shuffling around the\n> databases and WAL files to put them on different disks. The principle\n> is to divide the disk tasks that are simultaenous ammonng as many disks\n> as possible; thus the WAL files always do better on a different disk\n> and channel than the database.\n\nThat's what I've read about database disk system design. Reduce spindle contention by using lots of drives. (especially in Philip Greenspun's book, but he's talking about 2x7 drives as a minimal configuration and 2x21 as ideal for larger systems. And when licensing is more expensive than that sort of drive system, it's all roundoff error.)\n\nSo, assuming that I have three databases with roughly equal load on them, does it make sense to partition them like:\n\ndisk 0: os/swap/log/backup staging\ndisk 1: WAL 1, DB 2\ndisk 2: WAL 2, DB 3\ndisk 3: WAL 3, DB 1\n\nOr, in a slightly bigger drive system split 2 ways then mirrored.\ndisk 0: os etc\nDisk 1,2: WAL 1, DB 2\nDisk 3,4: WAL 2, DB 1\n \n From an admin point of view, would this be done with alternate locations, symlinks, or multiple concurrent pg processes?\n\n> > > Does the write ahead logging of PG mean that no matter what indexes\n> > and data are changed, that there will be one sync to disk? Does this\n> > reduce the penalty of indexes? \n> \n> In a word: No. Depending on the size of the update, there may be\n> multiple synchs. And indexes do carry a significant penalty on large\n> updates; just try runninng 10,000 updates to an indexed column as one\n> transaction, and the penalty will be obvious. In fact, for my data\n> load procedures, I tend to drop and re-create indexes.\n\nMost of my update procedures are single row updates, with the exception being things that are already background tasks that the user doesn't notice the difference between 10 and 20 sec. So maybe I'm lucky there.\n\n> Mirrored drives are different than RAID. 
However, you are correct\n> that the redundancy/fail-over factor in some RAID and Mirroring comes\n> at a performance penalty.\n\n From howtos I've seen, there _can_ be a speed boost with mirroring on read using the linux kernel raid 1. Write performance suffers though. \n \n> But you need to determine where you are actually losing time.\n\nThat looks like it will get me started.\n\neric\n\n\n", "msg_date": "Tue, 29 Oct 2002 10:43:33 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "On Tue, Oct 29, 2002 at 10:43:33AM -0800, eric soroos wrote:\n> Is a scsi system that much better? Looking at prices, scsi is 1/2\n> the capacity and double the price for the 80 gig 7200rpm ide vs 36\n> gig 10k rpm scsi. Assuming that I'll never run out of space before\n> running out of performance, I can dedicate 2x the number of ide\n> drives to the problem.\n\nSCSI is dramatically better at using the interface. It is much\nsmarter about, for instance, handling multiple disks at the same\ntime; and it requires less attention from the CPU.\n\nThat said, if you have enough high speed IDE controllers and disks,\nyou'll probably beat an older SCSI system. And you can't beat the\nprice/performance of IDE RAID. We use it for some applications. \n\nNote also that you get better RAID controllers from SCSI vendors,\njust because it's the official high speed offering. The Promise IDE\nRAID is nice, but it sure isn't as fast as the latest SCSI RAID\ncontrollers. (We have also found that there's some overhead in the\nIDE RAID. It was better under FreeBSD than I'm now experiencing\nunder Linux. But that might just be my prejudices showing!)\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 29 Oct 2002 15:13:41 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "On Tue, 2002-10-29 at 11:31, Andrew Sullivan wrote:\n> On Tue, Oct 29, 2002 at 09:10:31AM -0800, Josh Berkus wrote:\n> > \n> > Out of curiousity, what is it with RAID 5? I've encountered the poor\n> > write performance too ... any idea why?\n> \n> It largely depends on the controller and te implementation. It has\n> to do with the cost of calculating the checksum. If the\n> implementation of that is inefficient, the writes become inefficient.\n\nA high-quality smart controller with lots of cache RAM definitely\nnegates the RAID5 performance issues.\n\n(Of course, I'm referring to enterprise-level rack-mounted h/w\nthat costs big bucks...)\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "29 Oct 2002 15:09:41 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]\n> On Behalf Of Ron Johnson\n> Sent: Tuesday, October 29, 2002 3:10 PM\n> To: [email protected]\n> Subject: Re: [pgsql-performance] Low Budget Performance\n>\n>\n> On Tue, 2002-10-29 at 11:31, Andrew Sullivan wrote:\n> > On Tue, Oct 29, 2002 at 09:10:31AM -0800, Josh Berkus\n> > wrote:\n> > >\n> > > Out of curiousity, what is it with RAID 5? I've\n> > > encountered the poor write performance too ... any\n> > > idea why?\n> >\n> > It largely depends on the controller and the\n> > implementation. It has to do with the cost of\n> > calculating the checksum. If the implementation of\n> > that is inefficient, the writes become inefficient.\n>\n> A high-quality smart controller with lots of cache RAM\n> definitely negates the RAID5 performance issues.\n>\n> (Of course, I'm referring to enterprise-level rack-mounted\n> h/w that costs big bucks...)\n\nOnly if you buy it new. EBay has some great deals these days. My company\njust purchased a very nice Quad Xeon w/ 2GB RAM and last year's high-end\nPERC RAID controller for under $5K. The external drive array was a bit more\nexpensive but that is an optional purchase. The 3x9GB SCSI drives that came\nwith the machine should be more than sufficient to run a greater than small\ndatabase server. If you don't want to spend $5K there are Dual Xeon machines\nwith less RAM and not quite so nice RAID controllers that you can get in the\n$2.5K to $3.5K range.\n\nrjsjr\n\n", "msg_date": "Tue, 29 Oct 2002 15:57:29 -0600", "msg_from": "\"Robert J. Sanford, Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance" } ]
[ { "msg_contents": "Hi:\n\tNot being too versed with postgres and beta tests, is there \na place or person who would review the results of the benchmark test for 7.3?\n\[email protected]\n", "msg_date": "Tue, 29 Oct 2002 08:26:37 -0600", "msg_from": "Kenny H Klatt <[email protected]>", "msg_from_op": true, "msg_subject": "Possible OT - Benchmark test for 7.3 Beta 3" }, { "msg_contents": "On Tue, Oct 29, 2002 at 08:26:37AM -0600, Kenny H Klatt wrote:\n> Hi:\n> \tNot being too versed with postgres and beta tests, is there \n> a place or person who would review the results of the benchmark test for 7.3?\n\nI'm unaware of anyone having done a benchmark. If you know of one,\nplease share it with all of us.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 29 Oct 2002 10:15:56 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible OT - Benchmark test for 7.3 Beta 3" } ]
[ { "msg_contents": "\nHi ,\n\nFor a particular table it was only dump and reload of the table that\nhelped in enabling index usage.\n\nI tried VACUUM ANALYZE and even recreating the indexes but it\ndid not work.\n\nwhy does the planner use the index like a miser?\nbelow are the details\n\nwas there anything bettwer i could have done for indexes getting used?\n\n\nregds\nmallah.\n\nQuery:\n\nexplain SELECT count( email_id ) from email_bank_mailing_lists where query_id='499';\nNOTICE: QUERY PLAN:\n\nAggregate (cost=4330.48..4330.48 rows=1 width=4)\n -> Index Scan using email_bank_ml_qid on email_bank_mailing_lists (cost=0.00..4327.28\n rows=1282 width=4)\nEXPLAIN\n\n\ndistribution of query_id in table:\ntotal: 256419\n\nquery_id | count(*)\n----------------------\n 298 | 6167\n 328 | 2083\n 354 | 9875\n 404 | 6974\n 432 | 5059\n 437 | 2497\n 440 | 2837\n 448 | 14624\n 449 | 13053\n 454 | 409\n 455 | 3725\n 456 | 560\n 458 | 3477\n 460 | 5561\n 486 | 41842\n 488 | 63642\n 492 | 2244\n 493 | 6047\n 494 | 37415\n 499 | 25010\n 501 | 3318\n\n\nbefore dump reload:\ntradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists;\nNOTICE: --Relation email_bank_mailing_lists--\nNOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\nNOTICE: Analyzing email_bank_mailing_lists\nVACUUM\ntradein_clients=# explain SELECT count( email_id ) from email_bank_mailing_lists where\nquery_id=499;NOTICE: QUERY PLAN:\n\nAggregate (cost=6863.24..6863.24 rows=1 width=4)\n -> Seq Scan on email_bank_mailing_lists (cost=0.00..6788.24 rows=30001 width=4)\n\nEXPLAIN\n\n\n\n\n\n\n\n\n\n\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n\n", "msg_date": "Fri, 1 Nov 2002 16:45:43 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Is dump-reload the only cure?" }, { "msg_contents": "On Fri, 2002-11-01 at 06:15, [email protected] wrote:\n\n\nLooks like a borderline case. See the costs of the index scan and\nsequential scan are very similar. Since 499 covers nearly 1 in 10\ntuples, it's likely found on nearly every page. This should make a\nsequential scan much cheaper.\n\nHowever, if the data is clumped together (not distributed throughout the\ntable) than an index scan may be preferable. So... CLUSTER may be\nuseful to you.\n\nIn the future please 'explain analyze' the queries you're looking at to\nsee actual costs as compared to the estimated cost.\n\n\n> 499 | 25010\n> 501 | 3318\n> \n> \n> before dump reload:\n> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists;\n> NOTICE: --Relation email_bank_mailing_lists--\n> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n> Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\n> NOTICE: Analyzing email_bank_mailing_lists\n> VACUUM\n> tradein_clients=# explain SELECT count( email_id ) from email_bank_mailing_lists where\n> query_id=499;NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=6863.24..6863.24 rows=1 width=4)\n> -> Seq Scan on email_bank_mailing_lists (cost=0.00..6788.24 rows=30001 width=4)\n> \n> EXPLAIN\n\n-- \n Rod Taylor\n\n", "msg_date": "01 Nov 2002 07:52:40 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" 
}, { "msg_contents": "\nHi Rod ,\n\nDoes it means that index scan is used for less frequenlty occuring data?\nyes my table was not clustered.\n\ncan u tell me what does 0.00..6788.24 and rows and width means?\n\nin explain out put cost=0.00..6788.24 rows=30001 width=4\n\n\nI have one more table where i face the similar problem , i have not dump - reloaded\nit yet , will post again if i face the problem.\n\n\nthanks\n\nRegds\nMallah.\n\n\n> On Fri, 2002-11-01 at 06:15, [email protected] wrote:\n>\n>\n> Looks like a borderline case. See the costs of the index scan and sequential scan are very\n> similar. Since 499 covers nearly 1 in 10 tuples, it's likely found on nearly every page. This\n> should make a sequential scan much cheaper.\n>\n> However, if the data is clumped together (not distributed throughout the table) than an index\n> scan may be preferable. So... CLUSTER may be useful to you.\n>\n> In the future please 'explain analyze' the queries you're looking at to see actual costs as\n> compared to the estimated cost.\n>\n>\n>> 499 | 25010\n>> 501 | 3318\n>>\n>>\n>> before dump reload:\n>> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists; NOTICE: --Relation\n>> email_bank_mailing_lists--\n>> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n>> Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\n>> NOTICE: Analyzing email_bank_mailing_lists\n>> VACUUM\n>> tradein_clients=# explain SELECT count( email_id ) from email_bank_mailing_lists where\n>> query_id=499;NOTICE: QUERY PLAN:\n>>\n>> Aggregate (cost=6863.24..6863.24 rows=1 width=4)\n>> -> Seq Scan on email_bank_mailing_lists (cost=0.00..6788.24 rows=30001 width=4)\n>>\n>> EXPLAIN\n>\n> --\n> Rod Taylor\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n\n", "msg_date": "Fri, 1 Nov 2002 18:33:36 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" 
}, { "msg_contents": "\n\nRod ,\n\nClustering did work for my other case ;-)\n\n\ntradein_clients=> explain analyze SELECT count(*) from email_source where source_id=173;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=13042.91..13042.91 rows=1 width=0) (actual time=1415.32..1415.32 rows=1 loops=1)\n -> Seq Scan on email_source (cost=0.00..12964.48 rows=31375 width=0) (actual\n time=1.19..1368.58 rows=32851 loops=1)Total runtime: 1415.42 msec\n\nEXPLAIN\ntradein_clients=> \\d email_source\n Table \"email_source\"\n Column | Type | Modifiers\n-----------+---------+-----------\n email_id | integer |\n source_id | integer |\nIndexes: email_source_sid\nUnique keys: email_source_idx\ntradein_clients=> CLUSTER email_source_sid on email_source ;\nCLUSTER\ntradein_clients=>\ntradein_clients=> explain analyze SELECT count(*) from email_source where source_id=173;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=11458.83..11458.83 rows=1 width=0) (actual time=207.73..207.73 rows=1 loops=1)\n -> Index Scan using email_source_sid on email_source (cost=0.00..11449.76 rows=3627 width=0)\n (actual time=0.27..161.04 rows=32851 loops=1)Total runtime: 207.90 msec\nEXPLAIN\n\n\nDoes it Mean that clustered indexes are guarrented to be used for index scan?\none more thing does clustering means that all future data addition will happen\nin the ordered manner only i mean consecutively in terms of source_id?\n\nRegds\nMALLAH.\n\n\n\n\n\n\n\n\n\n\n\n> On Fri, 2002-11-01 at 06:15, [email protected] wrote:\n>\n>\n> Looks like a borderline case. See the costs of the index scan and sequential scan are very\n> similar. Since 499 covers nearly 1 in 10 tuples, it's likely found on nearly every page. This\n> should make a sequential scan much cheaper.\n>\n> However, if the data is clumped together (not distributed throughout the table) than an index\n> scan may be preferable. So... CLUSTER may be useful to you.\n>\n> In the future please 'explain analyze' the queries you're looking at to see actual costs as\n> compared to the estimated cost.\n>\n>\n>> 499 | 25010\n>> 501 | 3318\n>>\n>>\n>> before dump reload:\n>> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists; NOTICE: --Relation\n>> email_bank_mailing_lists--\n>> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n>> Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\n>> NOTICE: Analyzing email_bank_mailing_lists\n>> VACUUM\n>> tradein_clients=# explain SELECT count( email_id ) from email_bank_mailing_lists where\n>> query_id=499;NOTICE: QUERY PLAN:\n>>\n>> Aggregate (cost=6863.24..6863.24 rows=1 width=4)\n>> -> Seq Scan on email_bank_mailing_lists (cost=0.00..6788.24 rows=30001 width=4)\n>>\n>> EXPLAIN\n>\n> --\n> Rod Taylor\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n\n", "msg_date": "Fri, 1 Nov 2002 18:45:22 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" }, { "msg_contents": "See Paragraph 2 of the description section:\nhttp://www.postgresql.org/idocs/index.php?sql-explain.html\n\nIn the above is a good explanation of 'cost'. 
Rows is the number of\nrows estimated to be returned, and width is the expected number of\ncolumns it needs to deal with at that point.\n\nOn Fri, 2002-11-01 at 08:03, [email protected] wrote:\n> \n> Hi Rod ,\n> \n> Does it means that index scan is used for less frequenlty occuring data?\n> yes my table was not clustered.\n> \n> can u tell me what does 0.00..6788.24 and rows and width means?\n> \n> in explain out put cost=0.00..6788.24 rows=30001 width=4\n> \n> \n> I have one more table where i face the similar problem , i have not dump - reloaded\n> it yet , will post again if i face the problem.\n> \n> \n> thanks\n> \n> Regds\n> Mallah.\n> \n> \n> > On Fri, 2002-11-01 at 06:15, [email protected] wrote:\n> >\n> >\n> > Looks like a borderline case. See the costs of the index scan and sequential scan are very\n> > similar. Since 499 covers nearly 1 in 10 tuples, it's likely found on nearly every page. This\n> > should make a sequential scan much cheaper.\n> >\n> > However, if the data is clumped together (not distributed throughout the table) than an index\n> > scan may be preferable. So... CLUSTER may be useful to you.\n> >\n> > In the future please 'explain analyze' the queries you're looking at to see actual costs as\n> > compared to the estimated cost.\n> >\n> >\n> >> 499 | 25010\n> >> 501 | 3318\n> >>\n> >>\n> >> before dump reload:\n> >> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists; NOTICE: --Relation\n> >> email_bank_mailing_lists--\n> >> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n> >> Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\n> >> NOTICE: Analyzing email_bank_mailing_lists\n> >> VACUUM\n> >> tradein_clients=# explain SELECT count( email_id ) from email_bank_mailing_lists where\n> >> query_id=499;NOTICE: QUERY PLAN:\n> >>\n> >> Aggregate (cost=6863.24..6863.24 rows=1 width=4)\n> >> -> Seq Scan on email_bank_mailing_lists (cost=0.00..6788.24 rows=30001 width=4)\n> >>\n> >> EXPLAIN\n> >\n> > --\n> > Rod Taylor\n> \n> \n> \n> -----------------------------------------\n> Get your free web based email at trade-india.com.\n> \"India's Leading B2B eMarketplace.!\"\n> http://www.trade-india.com/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n-- \n Rod Taylor\n\n", "msg_date": "01 Nov 2002 09:07:41 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" }, { "msg_contents": "On Fri, 2002-11-01 at 08:03, [email protected] wrote:\n> \n> Hi Rod ,\n> \n> Does it means that index scan is used for less frequenlty occuring data?\n> yes my table was not clustered.\n> \n> can u tell me what does 0.00..6788.24 and rows and width means?\n> \n> in explain out put cost=0.00..6788.24 rows=30001 width=4\n> \n> \n> I have one more table where i face the similar problem , i have not dump - reloaded\n> it yet , will post again if i face the problem.\n\nKeep in mind that an index scan is very expensive in regards to a single\ntuple. It has to run through (fetch) the index pages, then fetch the\npages from the table. Since the table fetches are random, the harddrive\nwill probably incur a seek for each tuple found in the index. 
The seeks\nadd up much quicker than a sequential scan (without nearly as many seeks\nor drive head movements).\n \n-- \n Rod Taylor\n\n", "msg_date": "01 Nov 2002 09:11:03 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" }, { "msg_contents": "> Does it Mean that clustered indexes are guarrented to be used for index scan?\n> one more thing does clustering means that all future data addition will happen\n> in the ordered manner only i mean consecutively in terms of source_id?\n\nNo, but clustering a table allows an index scan to visit less pages, and\nmake less disk seeks. This in turn makes it a better choice for some\nqueries due to the current layout of tuples on the disk. However, there\nare new borderline cases -- just in different places than before.\n\n-- \n Rod Taylor\n\n", "msg_date": "01 Nov 2002 09:38:51 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is dump-reload the only cure?" }, { "msg_contents": "\n\nThanks for the insight Rod.\nis there any other place i can know more abt these principles?\n\nBut if the table is clustered then the pages are stored catagiously\nwith respect to that column rite?\n\n\n> On Fri, 2002-11-01 at 08:03, [email protected] wrote:\n>>\n>> Hi Rod ,\n>>\n>> Does it means that index scan is used for less frequenlty occuring data? yes my table was not\n>> clustered.\n>>\n>> can u tell me what does 0.00..6788.24 and rows and width means?\n>>\n>> in explain out put cost=0.00..6788.24 rows=30001 width=4\n>>\n>>\n>> I have one more table where i face the similar problem , i have not dump - reloaded it yet ,\n>> will post again if i face the problem.\n>\n> Keep in mind that an index scan is very expensive in regards to a single tuple. It has to run\n> through (fetch) the index pages, then fetch the pages from the table. Since the table fetches\n> are random, the harddrive will probably incur a seek for each tuple found in the index. The\n> seeks add up much quicker than a sequential scan (without nearly as many seeks or drive head\n> movements).\n>\n> --\n> Rod Taylor\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n\n", "msg_date": "Fri, 1 Nov 2002 20:11:22 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" }, { "msg_contents": "On Fri, 2002-11-01 at 06:15, [email protected] wrote:\n> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists;\n> NOTICE: --Relation email_bank_mailing_lists--\n> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.\n> Total CPU 0.24s/0.04u sec elapsed 0.30 sec.\n> NOTICE: Analyzing email_bank_mailing_lists\n> VACUUM\n\nI'd suggest running a vacuum full and then running vacuum analyze more\noften. 44822 unused tuples seems quite excessive to me...\n\nRobert Treat\n\n\n\n", "msg_date": "01 Nov 2002 14:54:05 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Is dump-reload the only cure?" } ]
[ { "msg_contents": "Does it make a performance difference if I use a char(20) or a char(36)\nas the primary key? My thought is no, but I would like to hear more\nopinions.\n\nAnd a little further off topic(since we have many database experts\nhere), does it matter on MS SQL server 7?\n\nThanks!\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "01 Nov 2002 15:18:01 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": true, "msg_subject": "Does this matter?" }, { "msg_contents": "Wei,\n\n> Does it make a performance difference if I use a char(20) or a char(36)\n> as the primary key? My thought is no, but I would like to hear more\n> opinions.\n\nYes, it does, though probably minor unless you have millions of records. CHAR \nis padded out to the specified length. Therefore the index on a char(36) \ncolumn will be a little larger, and thus a little slower, than the char(20). \n\nNow, there would be no difference between VARCHAR(20) and VARCHAR(36) unless \nyou used some of the extra 16 characters on most rows.\n\nEither way, for tables of a few thousand records, I doubt that you'll notice \nthe difference. BTW, why not use a SERIAL value as a surrogate primary key?\n\n> And a little further off topic(since we have many database experts\n> here), does it matter on MS SQL server 7?\n\nYes, same reason.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 1 Nov 2002 12:23:48 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "Josh:\n\nSince I need to use a GUID as the primary key, I have to use the char\ndatatype.\n\nOn Fri, 2002-11-01 at 15:23, Josh Berkus wrote:\n> Wei,\n> \n> > Does it make a performance difference if I use a char(20) or a char(36)\n> > as the primary key? My thought is no, but I would like to hear more\n> > opinions.\n> \n> Yes, it does, though probably minor unless you have millions of records. CHAR \n> is padded out to the specified length. Therefore the index on a char(36) \n> column will be a little larger, and thus a little slower, than the char(20). \nDoes it affect the INSERT/UPDATE/DELETE operations on tables or simply\nthe SELECT operation or both?\n\n> \n> Now, there would be no difference between VARCHAR(20) and VARCHAR(36) unless \n> you used some of the extra 16 characters on most rows.\n> \n> Either way, for tables of a few thousand records, I doubt that you'll notice \n> the difference. BTW, why not use a SERIAL value as a surrogate primary key?\n> \n> > And a little further off topic(since we have many database experts\n> > here), does it matter on MS SQL server 7?\n> \n> Yes, same reason.\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "01 Nov 2002 15:52:22 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "> Wei,\n>\n> > Does it make a performance difference if I use a char(20) or a char(36)\n> > as the primary key? My thought is no, but I would like to hear more\n> > opinions.\n>\n> Yes, it does, though probably minor unless you have millions of records. CHAR\n> is padded out to the specified length. Therefore the index on a char(36)\n> column will be a little larger, and thus a little slower, than the char(20).\n>\n\nReally? According to this url (search for \"Tip\") there is no performance\ndifference just a space difference. 
I don't know for sure either way, but\nif there is a difference the manual needs updating.\n\nhttp://www.postgresql.org/idocs/index.php?datatype-character.html\n\n-philip\n\n", "msg_date": "Fri, 1 Nov 2002 12:53:29 -0800 (PST)", "msg_from": "Philip Hallstrom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "On Fri, Nov 01, 2002 at 12:53:29PM -0800, Philip Hallstrom wrote:\n\n> > is padded out to the specified length. Therefore the index on a char(36)\n> > column will be a little larger, and thus a little slower, than the char(20).\n> >\n> \n> Really? According to this url (search for \"Tip\") there is no performance\n> difference just a space difference. I don't know for sure either way, but\n> if there is a difference the manual needs updating.\n\nHmm. Maybe a clarification, but I don't think this is quite what the\ntip is talking about. The tip points out that part of the cost is\n\"the increased storage\" from the blank-padded type (char) as\ncontrasted with non-padded types (like text). The tip isn't talking\nabout whether a length of 20 is faster than a length of 36. Anyway,\nI can't really believe the length would be a big deal except on\nreally huge tables.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 1 Nov 2002 16:10:57 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "\nPhillip,\n\n> Really? According to this url (search for \"Tip\") there is no performance\n> difference just a space difference. I don't know for sure either way, but\n> if there is a difference the manual needs updating.\n> \n> http://www.postgresql.org/idocs/index.php?datatype-character.html\n\nActually, that note is intended to tell people that CHAR is not any faster \nthan VARCHAR for the same-length string ... since CHAR *is* faster than \nVARCHAR in some systems, like MS SQL Server.\n\n-- \n-Josh Berkus\n\n\n", "msg_date": "Fri, 1 Nov 2002 14:00:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "Andrew Sullivan wrote:\n> Hmm. Maybe a clarification, but I don't think this is quite what the\n> tip is talking about. The tip points out that part of the cost is\n> \"the increased storage\" from the blank-padded type (char) as\n> contrasted with non-padded types (like text). The tip isn't talking\n> about whether a length of 20 is faster than a length of 36. Anyway,\n> I can't really believe the length would be a big deal except on\n> really huge tables.\n\nIt really depends on the access. I spend quite a bit of time optimizing\ndatabase internals and the size of an index matters much more than is\napparent in certain cases. This is especially true for medium sized tables.\n\nThe real issue is the number of reads required to find a particular entry in\nthe index.\n\nAssume a btree that tries to be 70% full. 
Assume 40 bytes for a header, 8\nbytes overhead per index entry and an 8K btree page.\n\nThe following represents the number of index entries that can be contained in\nboth a two level and a three level btree.\n\n Type Bytes Items per page 2 3\n ---- ------ ----- ------ ----------\n char(36) 40 129 16,641 2,146,689\n char(20) 24 203 41,209 8,365,427\n\nDepending on the size of the table, the number of pages in the btree affect\nperformance in two separate ways:\n\n1) Cache hit ratio - This greatly depends on the way the tables are accessed\nbut more densely packed btree indices are used more often and more likely to\nbe present in a cache than less densely packed indices.\n\n2) I/O time - If the number of items reaches a particular size then the btree\nwill add an additional level which could result in a very expensive I/O\noperation per access. How this affects performance depends very specifically\non the way the index is used.\n\nThe problem is not necessarily the size of the table but the transitions in\nnumbers of levels in the btree. For a table size of 200 to 15,000 tuples,\nthere won't be a major difference.\n\nFor a table size of 25,000 to 40,000 tuples, and assuming the root page is\ncached, an index lookup can be twice as fast with a char(20) as it is for a\nchar(36) because in the one case a two-level btree handles the table while a\nthree-level btree is needed for the other.\n\nThis won't typically affect multi-user throughput as much since other\nbackends will be working while the I/O's are waiting but it might affect the\nperformance as seen from a single client.\n\n- Curtis\n\n\n", "msg_date": "Fri, 1 Nov 2002 18:00:19 -0400", "msg_from": "\"Curtis Faith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "\nWei,\n\n> Does it affect the INSERT/UPDATE/DELETE operations on tables or simply\n> the SELECT operation or both?\n\nAll of the above. How many rows are we talking about, anyway? The difference \nmay be academic.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 1 Nov 2002 14:01:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "Wei Weng <[email protected]> writes:\n> Since I need to use a GUID as the primary key, I have to use the char\n> datatype.\n\nTry uniqueidentifier:\n\n http://archives.postgresql.org/pgsql-announce/2002-07/msg00001.php\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "03 Nov 2002 23:08:17 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does this matter?" }, { "msg_contents": "Thanks, I noticed that sweet addon and will try to integrate it into our \nsystem once 7.3 is officially released. 
:)\n\nbtw, do we have a release date yet?\n\nThanks\n\n\nWei\n\nAt 11:08 PM 11/3/2002 -0500, you wrote:\n>Wei Weng <[email protected]> writes:\n> > Since I need to use a GUID as the primary key, I have to use the char\n> > datatype.\n>\n>Try uniqueidentifier:\n>\n> http://archives.postgresql.org/pgsql-announce/2002-07/msg00001.php\n>\n>Cheers,\n>\n>Neil\n>\n>--\n>Neil Conway <[email protected]> || PGP Key ID: DB3C29FC\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n", "msg_date": "Sun, 03 Nov 2002 23:33:18 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Does this matter?" } ]
[ { "msg_contents": "\nhi all,\n\nwill an upgrade to a dual processor machine\nnoticeably increase performance of a postgresql server?\n\nload average now often is about 4.0 - 8.5 - and I'll\nhave got to do something sooner or later...\n\nany help is appreciated...\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n", "msg_date": "Sat, 9 Nov 2002 13:32:53 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Upgrade to dual processor machine?" }, { "msg_contents": "Hi Henrik,\n\nIt'd be helpful to know the other specifics of the server, and a bit\nabout the workload the server has.\n\ni.e.\n\n- Processor type and speed\n- Memory\n- Disk configuration\n- OS\n\n- Do you do other stuff on it, apart from PostgreSQL?\n\n- How many clients simultaneously connecting to it?\n- What do the clients connect with? JDBC/ODBC/libpq/etc?\n\n- Have you configured the memory after installation of PostgreSQL, so\nit's better optimised than the defaults?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nHenrik Steffen wrote:\n> \n> hi all,\n> \n> will an upgrade to a dual processor machine\n> noticeably increase performance of a postgresql server?\n> \n> load average now often is about 4.0 - 8.5 - and I'll\n> have got to do something sooner or later...\n> \n> any help is appreciated...\n> \n> --\n> \n> Mit freundlichem Gru�\n> \n> Henrik Steffen\n> Gesch�ftsf�hrer\n> \n> top concepts Internetmarketing GmbH\n> Am Steinkamp 7 - D-21684 Stade - Germany\n> --------------------------------------------------------\n> http://www.topconcepts.com Tel. +49 4141 991230\n> mail: [email protected] Fax. +49 4141 991233\n> --------------------------------------------------------\n> 24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n> --------------------------------------------------------\n> Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\n> System-Partner gesucht: http://www.franchise.city-map.de\n> --------------------------------------------------------\n> Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n> --------------------------------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 09 Nov 2002 23:50:37 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" 
}, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n> will an upgrade to a dual processor machine\n> noticeably increase performance of a postgresql server?\n\nAssuming you have more than 1 concurrent client, it likely\nwill. Whether it will be a huge performance improvement depends on the\nother characteristics of the workload (e.g. is it I/O bound or CPU\nbound?).\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "09 Nov 2002 13:56:33 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nHi Justin,\n\nhere a little more information:\n\n> - Processor type and speed\nIntel Pentium IV, 1.6 GHz\n\n> - Memory\n1024 MB ECC-RAM\n\n> - Disk configuration\n2 x 60 GB IDE (Raid 0)\n\n> - OS\nRedhat Linux\n\n>\n> - Do you do other stuff on it, apart from PostgreSQL?\nNo, it's a dedicated database server\n\n>\n> - How many clients simultaneously connecting to it?\none webserver with max. 50 instances, approximately 10.000 users a day,\nabout 150.000 Pageviews daily. All pages are created on the fly using\nmod_perl connecting to the db-server.\n\n> - What do the clients connect with? JDBC/ODBC/libpq/etc?\nI am using Pg.pm --- this is called libpq, isn't it?\n\n> - Have you configured the memory after installation of PostgreSQL, so\n> it's better optimised than the defaults?\nno - what should I do? Looking at 'top' right now, I see the following:\nMem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n\nSo, what do you suggest to gain more performance?\n\nThanks in advance,\n\n> Hi Henrik,\n>\n> It'd be helpful to know the other specifics of the server, and a bit\n> about the workload the server has.\n>\n> i.e.\n>\n> - Processor type and speed\n> - Memory\n> - Disk configuration\n> - OS\n>\n> - Do you do other stuff on it, apart from PostgreSQL?\n>\n> - How many clients simultaneously connecting to it?\n> - What do the clients connect with? JDBC/ODBC/libpq/etc?\n>\n> - Have you configured the memory after installation of PostgreSQL, so\n> it's better optimised than the defaults?\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Henrik Steffen wrote:\n> >\n> > hi all,\n> >\n> > will an upgrade to a dual processor machine\n> > noticeably increase performance of a postgresql server?\n> >\n> > load average now often is about 4.0 - 8.5 - and I'll\n> > have got to do something sooner or later...\n> >\n> > any help is appreciated...\n> >\n> > --\n> >\n> > Mit freundlichem Gru�\n> >\n> > Henrik Steffen\n> > Gesch�ftsf�hrer\n> >\n> > top concepts Internetmarketing GmbH\n> > Am Steinkamp 7 - D-21684 Stade - Germany\n> > --------------------------------------------------------\n> > http://www.topconcepts.com Tel. +49 4141 991230\n> > mail: [email protected] Fax. 
+49 4141 991233\n> > --------------------------------------------------------\n> > 24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n> > --------------------------------------------------------\n> > Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\n> > System-Partner gesucht: http://www.franchise.city-map.de\n> > --------------------------------------------------------\n> > Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n> > --------------------------------------------------------\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n\n", "msg_date": "Mon, 11 Nov 2002 08:05:27 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n> > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> I am using Pg.pm --- this is called libpq, isn't it?\n\nWell, it's a thin Perl wrapper over libpq (which is the C client\nAPI). You said you're using mod_perl: you may wish to consider using\nDBI and DBD::Pg instead of Pg.pm, so you can make use of persistent\nconnections using Apache::DBI.\n\n> > - Have you configured the memory after installation of PostgreSQL, so\n> > it's better optimised than the defaults?\n\n> no - what should I do? Looking at 'top' right now, I see the following:\n> Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n\nNo, Justin is referring to the memory-related configuration options in\npostgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the\nlike.\n\n> So, what do you suggest to gain more performance?\n\nIMHO, dual processors would likely be a good performance improvement.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "11 Nov 2002 02:32:43 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "Hi Henrik,\n\nOk, you're machine is doing a decent amount of work, and will need\nlooking at carefully.\n\nGoing to get more specific about some stuff, as it'll definitely assist\nwith giving you proper guidance here.\n\n- Have you run any system-performance tools apart from top, to figure\nout how the various parts of your system are operating?\n\nFor example, by looking into and measuring the different parts of your\nsystem, you may find you have several processes simultaneously waiting\nto execute purely because the disk drives can't keep up with the\nrequests. The solution may turn out to be upgrading your disks instead\nof your CPU's (example only). Without taking measurements to the point\nof understanding what's going on, you'll only be guessing.\n\nThe most concerning aspect at the moment is this:\n\n\"> - Have you configured the memory after installation of PostgreSQL, so\n> it's better optimised than the defaults?\nno - what should I do? Looking at 'top' right now, I see the following:\nMem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\"\n\nThis is telling me that the system is operating close to using all it's\nmemory with running processes. *Bad* for this kind of thing. 
The\ndefault memory configuration for PostgreSQL is very lean and causes high\nCPU load and slow throughput. You don't seem to have enough spare\nmemory at the moment to really try adjusting this upwards. :(\n\nImportant question, how much memory can you get into that server? Could\nyou do 3GB or more?\n\nSomething that would be *really nice* is if you have a second server\nwith the same configuration hanging around that you can try stuff on. \nFor example, loading it with a copy of all your data, changing the\nmemory configuration, then testing it.\n\n\nFurther system specific details needed:\n\n- Which version of the Linux kernel, and of RedHat? Different version\nof the Linux kernel do things differently. For example version 2.4.3\ndoes virtual memory differently than say version 2.4.17.\n\n\n- If you do a ps (ps -ef) during a busy time, how many instances of the\nPostgreSQL process do you see in memory? This will tell you how many\nclients have an open connection to the database at any time.\n\n\n- How much data is in your database(s)? Just to get an idea of your\nvolume of data.\n\n\n- If disk performance turns out to be the problem, would you consider\nmoving to higher-end hard drives? This will probably mean an Ultra160\nor Ultra320 SCSI card, and drives to match. That's not going to be\ntotally cheap, but if you have a decent budget then it might be ok.\n\n\nAs you can see, this could take a bit of time an effort to get right.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nHenrik Steffen wrote:\n> \n> Hi Justin,\n> \n> here a little more information:\n> \n> > - Processor type and speed\n> Intel Pentium IV, 1.6 GHz\n> \n> > - Memory\n> 1024 MB ECC-RAM\n> \n> > - Disk configuration\n> 2 x 60 GB IDE (Raid 0)\n> \n> > - OS\n> Redhat Linux\n> \n> >\n> > - Do you do other stuff on it, apart from PostgreSQL?\n> No, it's a dedicated database server\n> \n> >\n> > - How many clients simultaneously connecting to it?\n> one webserver with max. 50 instances, approximately 10.000 users a day,\n> about 150.000 Pageviews daily. All pages are created on the fly using\n> mod_perl connecting to the db-server.\n> \n> > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> I am using Pg.pm --- this is called libpq, isn't it?\n> \n> > - Have you configured the memory after installation of PostgreSQL, so\n> > it's better optimised than the defaults?\n> no - what should I do? Looking at 'top' right now, I see the following:\n> Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n> \n> So, what do you suggest to gain more performance?\n> \n> Thanks in advance,\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 11 Nov 2002 18:44:24 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" 
}, { "msg_contents": "FWIW, in summer I have done a little bit of testing on one of our\ndual-cpu machines; among this I have been running OSDB (open source\ndatabase benchmark), 32 simulated clients, against Postgres (7.2.1)/Linux\n(2.4.18), once bootet with maxcpus=1 and once with maxcpus=2; if I\nremember correctly I saw something between 80-90% performance improvement\non the IR benchmark with the second cpu activated.\n\nNote the run was completely cpu-bound, neither harddisk nor memory was the\nbottleneck, so you may see less of an improvement if other parts of your\nsystem are the limit; but Postgres itself appears to make use of the\navailable cpus quite nicely.\n\nRegards\n-- \nHelge Bahmann <[email protected]> /| \\__\nThe past: Smart users in front of dumb terminals /_|____\\\n _/\\ | __)\n$ ./configure \\\\ \\|__/__|\nchecking whether build environment is sane... yes \\\\/___/ |\nchecking for AIX... no (we already did this) |\n\n", "msg_date": "Mon, 11 Nov 2002 12:56:11 +0100 (CET)", "msg_from": "Helge Bahmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On Mon, 11 Nov 2002, Henrik Steffen wrote:\n\n> > - How many clients simultaneously connecting to it?\n> one webserver with max. 50 instances, approximately 10.000 users a day,\n> about 150.000 Pageviews daily. All pages are created on the fly using\n> mod_perl connecting to the db-server.\n\nAha. What kind of web-side data caching are you doing? That alone can \ndrop your load down to < 1. Even something like a 1-hour cache, or \nsomething you can manually expire can work amazing wonders for database \nusage. So far, the only thing we've found that doesn't really fit this \nmodel are full text searches.\n\nHere, the biggest difference to our DB server was caused by *not* having \nall of our 9 webservers doing 50+ connections per second, which we \nachieved mainly through caching. Adding another CPU will work as well, \nbut as far as a long-term, not just throwing hardware at the problem \nkind of solution goes, see if you can get caching worked in there \nsomehow.\n\nSince you know you're using Pg.pm (switch to DBI::pg, trust me on this \none), you should have little problem either caching your result set or \neven the whole resulting page with select non-cachable parts. Not only \nwill that reduce page-load time, but the strain on your database as \nwell.\n\n-- \nShaun M. Thomas INN Database Administrator\nPhone: (309) 743-0812 Fax : (309) 743-0830\nEmail: [email protected] Web : www.townnews.com\n\n", "msg_date": "Mon, 11 Nov 2002 12:08:55 -0600 (CST)", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On Mon, 11 Nov 2002, Shaun Thomas wrote:\n\n> On Mon, 11 Nov 2002, Henrik Steffen wrote:\n> \n> > > - How many clients simultaneously connecting to it?\n> > one webserver with max. 50 instances, approximately 10.000 users a day,\n> > about 150.000 Pageviews daily. All pages are created on the fly using\n> > mod_perl connecting to the db-server.\n> \n> Aha. What kind of web-side data caching are you doing? That alone can \n> drop your load down to < 1. Even something like a 1-hour cache, or \n> something you can manually expire can work amazing wonders for database \n> usage. 
So far, the only thing we've found that doesn't really fit this \n> model are full text searches.\n> \n> Here, the biggest difference to our DB server was caused by *not* having \n> all of our 9 webservers doing 50+ connections per second, which we \n> achieved mainly through caching. Adding another CPU will work as well, \n> but as far as a long-term, not just throwing hardware at the problem \n> kind of solution goes, see if you can get caching worked in there \n> somehow.\n> \n> Since you know you're using Pg.pm (switch to DBI::pg, trust me on this \n> one), you should have little problem either caching your result set or \n> even the whole resulting page with select non-cachable parts. Not only \n> will that reduce page-load time, but the strain on your database as \n> well.\n\nAgreed. I highly recommend squid as a caching proxy. Powerful, fast, and \nOpen source. It's included in most flavors of Linux. I'm sure it's \navailable as a port if not included in most BSDs as well.\n\n", "msg_date": "Mon, 11 Nov 2002 11:25:45 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On Mon, 11 Nov 2002, Henrik Steffen wrote:\n\n> > - How many clients simultaneously connecting to it?\n> one webserver with max. 50 instances, approximately 10.000 users a day,\n> about 150.000 Pageviews daily. All pages are created on the fly using\n> mod_perl connecting to the db-server.\n\nIf you've got 50 simos, you could use more CPUs, whether you're I/O bound or \nnot. \n \n> > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> I am using Pg.pm --- this is called libpq, isn't it?\n> \n> > - Have you configured the memory after installation of PostgreSQL, so\n> > it's better optimised than the defaults?\n> no - what should I do? Looking at 'top' right now, I see the following:\n> Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n\nHey, what is the \"cached\" field saying there?  Is the machine caching a \nwhole bunch or just a little?  If it's caching a whole bunch, look at \nincreasing your shmmax and shmall settings and then the shared buffers in \npostgresql.conf for better performance.\n\n\n", "msg_date": "Tue, 12 Nov 2002 10:22:54 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nhi,\n\nthanks for this information...\n\nwe are already using squid as a transparent www-accelerator,\nthis works very well and squid handles about 70 % out of all hits.\n\nHowever, sometimes some search engines start\nindexing more than 25 DIFFERENT documents per second, this is when things\nstart getting more difficult .... we have played around a little\nwith an ip-based bandwidth-regulation tool at squid-level, which\nworks quite well - though you'll have to add new search-engines\non demand.\n\nBut anyway - we still have to look at the facts: we have had a 200 %\nincrease of visitors and pageviews during the last 6 months.\n\nUpgrading to DBI:pg is something I have been thinking about already,\nbut as far as I know, I am already using persistent connections with\nmod_perl and Pg.pm, am I not???!\n\n--\n\nMit freundlichem Gruß\n\nHenrik Steffen\nGeschäftsführer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. 
+49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Shaun Thomas\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: \"Justin Clift\" <[email protected]>; <[email protected]>\nSent: Monday, November 11, 2002 7:08 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> On Mon, 11 Nov 2002, Henrik Steffen wrote:\n>\n> > > - How many clients simultaneously connecting to it?\n> > one webserver with max. 50 instances, approximately 10.000 users a day,\n> > about 150.000 Pageviews daily. All pages are created on the fly using\n> > mod_perl connecting to the db-server.\n>\n> Aha. What kind of web-side data caching are you doing? That alone can\n> drop your load down to < 1. Even something like a 1-hour cache, or\n> something you can manually expire can work amazing wonders for database\n> usage. So far, the only thing we've found that doesn't really fit this\n> model are full text searches.\n>\n> Here, the biggest difference to our DB server was caused by *not* having\n> all of our 9 webservers doing 50+ connections per second, which we\n> achieved mainly through caching. Adding another CPU will work as well,\n> but as far as a long-term, not just throwing hardware at the problem\n> kind of solution goes, see if you can get caching worked in there\n> somehow.\n>\n> Since you know you're using Pg.pm (switch to DBI::pg, trust me on this\n> one), you should have little problem either caching your result set or\n> even the whole resulting page with select non-cachable parts. Not only\n> will that reduce page-load time, but the strain on your database as\n> well.\n>\n> --\n> Shaun M. Thomas INN Database Administrator\n> Phone: (309) 743-0812 Fax : (309) 743-0830\n> Email: [email protected] Web : www.townnews.com\n>\n\n", "msg_date": "Tue, 12 Nov 2002 20:22:29 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nThe cache-field is saying 873548K cached at the moment\nIs this a \"whole bunch of cache\" in your opinion? Is it too much?\n\nSo, where do i find and change shmmax shmall settings ??\nWhat should I put there?\n\nWhat is a recommended value for shared buffers in postgresql.conf ?\n\n\nFYI:\n\nps ax | grep -c postgres ==> shows 23 at the moment\n\nhowever, w shows: load average 3.09, 2.01, 1.76\n(this is low at the moment)\n\nthanks again,\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. 
+49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: \"Justin Clift\" <[email protected]>; <[email protected]>\nSent: Tuesday, November 12, 2002 6:22 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> On Mon, 11 Nov 2002, Henrik Steffen wrote:\n>\n> > > - How many clients simultaneously connecting to it?\n> > one webserver with max. 50 instances, approximately 10.000 users a day,\n> > about 150.000 Pageviews daily. All pages are created on the fly using\n> > mod_perl connecting to the db-server.\n>\n> If you've got 50 simos, you could use more CPUs, whether your I/O bound or\n> not.\n>\n> > > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> > I am using Pg.pm --- this is called libpq, isn't it?\n> >\n> > > - Have you configured the memory after installation of PostgreSQL, so\n> > > it's better optimised than the defaults?\n> > no - what should I do? Looking at 'top' right now, I see the following:\n> > Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n>\n> Hey, what is the \"cached\" field saying there? Is the machine caching a\n> whole bunch or just a little? If it's caching a whole bunch, look at\n> increasing your shmmax shmall settings and then the shared buffers in\n> postgresql.conf for better performance.\n>\n>\n>\n\n", "msg_date": "Tue, 12 Nov 2002 20:27:34 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nHi Justin,\n\nthanks for your answer, I will now try to deliver some more information\nto you... but I am in particular a programmer, not a hacker ;-)) so please\nexcuse if I lack some knowledge in system things and stuff....\n\n> - Have you run any system-performance tools apart from top, to figure\n> out how the various parts of your system are operating?\n\nnope. don't know any... which would you recommend for measurement of i/o\nusage etc. ?\n\n> The solution may turn out to be upgrading your disks instead\n> of your CPU's (example only).\n\nI will at least consider this... IDE disks are not that reliable either...\n\n> Important question, how much memory can you get into that server? Could\n> you do 3GB or more?\n\nno, sorry - 1 GB is allready the upper limit... I consider migrating everything\nto a new hardware, (dual?) intel xeon with perhaps even raid-v storage system with\na new upper limit of 12 GB RAM which will give me some upgrade-possibilies ... ;-))\n\n> Something that would be *really nice* is if you have a second server\n> with the same configuration hanging around that you can try stuff on.\n> For example, loading it with a copy of all your data, changing the\n> memory configuration, then testing it.\n\nI actually DO have an identical second server, and the db is allready on it.\nhowever, the system has a few problems concerning harddisk failuers and memory\nproblems (don't ever use it for running systems!! we had this server on the list\nbefore... 
I almost gave up on this one, when suddenly all problems and crashes\nwere solved when moving to a different machine as suggested by tom lane ....)\n... but for some testing purpose it should be sufficient ;-))\n\n\n\n> - Which version of the Linux kernel, and of RedHat?\n\nredhat - linux kernel 2.4.7-10\n\n\n> - If you do a ps (ps -ef) during a busy time, how many instances of the\n> PostgreSQL process do you see in memory? This will tell you how many\n> clients have an open connection to the database at any time.\n\nup to 40 clients are running... right now it's 21 processes and w shows\na load average of 1.92, 1.58, 1.59\n\n> - How much data is in your database(s)? Just to get an idea of your\n> volume of data.\n\nIt's 3.6 GB at the moment in one database in 98 user tables.\n\n> - If disk performance turns out to be the problem, would you consider\n> moving to higher-end hard drives\n\nalready considering ....\n\n\n--\n\nMit freundlichem Gruß\n\nHenrik Steffen\nGeschäftsführer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com  Tel. +49 4141 991230\nmail: [email protected]  Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Justin Clift\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, November 11, 2002 8:44 AM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> Hi Henrik,\n>\n> Ok, your machine is doing a decent amount of work, and will need\n> looking at carefully.\n>\n> Going to get more specific about some stuff, as it'll definitely assist\n> with giving you proper guidance here.\n>\n> - Have you run any system-performance tools apart from top, to figure\n> out how the various parts of your system are operating?\n>\n> For example, by looking into and measuring the different parts of your\n> system, you may find you have several processes simultaneously waiting\n> to execute purely because the disk drives can't keep up with the\n> requests. The solution may turn out to be upgrading your disks instead\n> of your CPU's (example only). Without taking measurements to the point\n> of understanding what's going on, you'll only be guessing.\n>\n> The most concerning aspect at the moment is this:\n>\n> \"> - Have you configured the memory after installation of PostgreSQL, so\n> > it's better optimised than the defaults?\n> no - what should I do? Looking at 'top' right now, I see the following:\n> Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\"\n>\n> This is telling me that the system is operating close to using all its\n> memory with running processes. *Bad* for this kind of thing. The\n> default memory configuration for PostgreSQL is very lean and causes high\n> CPU load and slow throughput. You don't seem to have enough spare\n> memory at the moment to really try adjusting this upwards. :(\n>\n> Important question, how much memory can you get into that server? 
Could\n> you do 3GB or more?\n>\n> Something that would be *really nice* is if you have a second server\n> with the same configuration hanging around that you can try stuff on.\n> For example, loading it with a copy of all your data, changing the\n> memory configuration, then testing it.\n>\n>\n> Further system specific details needed:\n>\n> - Which version of the Linux kernel, and of RedHat? Different version\n> of the Linux kernel do things differently. For example version 2.4.3\n> does virtual memory differently than say version 2.4.17.\n>\n>\n> - If you do a ps (ps -ef) during a busy time, how many instances of the\n> PostgreSQL process do you see in memory? This will tell you how many\n> clients have an open connection to the database at any time.\n>\n>\n> - How much data is in your database(s)? Just to get an idea of your\n> volume of data.\n>\n>\n> - If disk performance turns out to be the problem, would you consider\n> moving to higher-end hard drives? This will probably mean an Ultra160\n> or Ultra320 SCSI card, and drives to match. That's not going to be\n> totally cheap, but if you have a decent budget then it might be ok.\n>\n>\n> As you can see, this could take a bit of time an effort to get right.\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> Henrik Steffen wrote:\n> >\n> > Hi Justin,\n> >\n> > here a little more information:\n> >\n> > > - Processor type and speed\n> > Intel Pentium IV, 1.6 GHz\n> >\n> > > - Memory\n> > 1024 MB ECC-RAM\n> >\n> > > - Disk configuration\n> > 2 x 60 GB IDE (Raid 0)\n> >\n> > > - OS\n> > Redhat Linux\n> >\n> > >\n> > > - Do you do other stuff on it, apart from PostgreSQL?\n> > No, it's a dedicated database server\n> >\n> > >\n> > > - How many clients simultaneously connecting to it?\n> > one webserver with max. 50 instances, approximately 10.000 users a day,\n> > about 150.000 Pageviews daily. All pages are created on the fly using\n> > mod_perl connecting to the db-server.\n> >\n> > > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> > I am using Pg.pm --- this is called libpq, isn't it?\n> >\n> > > - Have you configured the memory after installation of PostgreSQL, so\n> > > it's better optimised than the defaults?\n> > no - what should I do? Looking at 'top' right now, I see the following:\n> > Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n> >\n> > So, what do you suggest to gain more performance?\n> >\n> > Thanks in advance,\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 12 Nov 2002 20:42:08 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nhello,\n\nAm I not allready using persistent connections with Pg.pm ?\n\nIt looks at least like it.... 
I only need a new connection\nfrom webserver to db-server once a new webserver child is born.\n\nwell, anyway i am consindering updating to DBD::Pg of course...\nit's only to change about 100.000 lines of perl code ....\n\n\n> No, Justin is referring to the memory-related configuration options in\n> postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the\n> like.\n\nso, how am i supposed to tune these settings ??\n\nthanks again,\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Neil Conway\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: \"Justin Clift\" <[email protected]>; <[email protected]>\nSent: Monday, November 11, 2002 8:32 AM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> \"Henrik Steffen\" <[email protected]> writes:\n> > > - What do the clients connect with? JDBC/ODBC/libpq/etc?\n> > I am using Pg.pm --- this is called libpq, isn't it?\n>\n> Well, it's a thin Perl wrapper over libpq (which is the C client\n> API). You said you're using mod_perl: you may wish to consider using\n> DBI and DBD::Pg instead of Pg.pm, so you can make use of persistent\n> connections using Apache::DBI.\n>\n> > > - Have you configured the memory after installation of PostgreSQL, so\n> > > it's better optimised than the defaults?\n>\n> > no - what should I do? Looking at 'top' right now, I see the following:\n> > Mem 1020808K av, 1015840K used, 4968K free, 1356K shrd, 32852K buff\n>\n> No, Justin is referring to the memory-related configuration options in\n> postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the\n> like.\n>\n> > So, what do you suggest to gain more performance?\n>\n> IMHO, dual processors would likely be a good performance improvement.\n>\n> Cheers,\n>\n> Neil\n>\n> --\n> Neil Conway <[email protected]> || PGP Key ID: DB3C29FC\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Tue, 12 Nov 2002 20:45:12 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" 
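Since the question of how to tune these settings comes up repeatedly in this thread, here is roughly what the memory-related block of postgresql.conf might look like for a dedicated 1 GB machine. The figures are illustrative starting points only, not values taken from this thread, and raising shared_buffers this far also means raising the kernel's SHMMAX limit and restarting the postmaster:

    shared_buffers = 12288         # about 96 MB of 8 kB pages
    sort_mem = 8192                # 8 MB per sort; keep modest with 40+ backends
    effective_cache_size = 65536   # about 512 MB, roughly what Linux holds in its disk cache

On a 2.4 kernel the shared memory ceiling can be raised with, for example, 'echo 134217728 > /proc/sys/kernel/shmmax' before the postmaster is started.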
}, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n> > No, Justin is referring to the memory-related configuration options in\n> > postgresql.conf, like shared_buffers, wal_buffers, sort_mem, and the\n> > like.\n> \n> so, how am i supposed to tune these settings ??\n\npostgresql.conf\n\nSee the documentation:\n\n http://developer.postgresql.org/docs/postgres/runtime-config.html\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "12 Nov 2002 22:37:35 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "> The cache-field is saying 873548K cached at the moment\n> Is this a \"whole bunch of cache\" in your opinion? Is it too much?\n\n Too much cache? It ain't possible. ; )\n\n For what it's worth, my DB machine generally uses about 1.25 gigs for\ndisk cache, in addition to the 64 megs that are on the RAID card, and\nthat's just fine with me. I allocate 256 megs of shared memory (32768\nbuffers), and the machine hums along very nicely. vmstat shows that\nactual reads to the disk are *extremely* rare, and the writes that come\nfrom inserts/etc. are nicely buffered.\n\n Here's how I chose 256 megs for shared buffers: First, I increased the\nshared buffer amount until I didn't see any more performance benefits.\nThen I doubled it just for fun. ; )\n\n Again, in your message it seemed like you were doing quite a bit of\nwrites - have you disabled fsync, and what sort of disk system do you\nhave?\n\nsteve\n\n", "msg_date": "Thu, 14 Nov 2002 11:46:15 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nhi steve,\n\nwhy fsync? - what's fsync? never heard of it... google tells\nme something about syncing of remote hosts ... so why should I\nactivate it ?? ... I conclude, it's probably disabled because\nI don't know what it is ....\n\nit's a raid-1 ide system\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Steve Wolfe\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, November 14, 2002 7:46 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> > The cache-field is saying 873548K cached at the moment\n> > Is this a \"whole bunch of cache\" in your opinion? Is it too much?\n>\n> Too much cache? It ain't possible. ; )\n>\n> For what it's worth, my DB machine generally uses about 1.25 gigs for\n> disk cache, in addition to the 64 megs that are on the RAID card, and\n> that's just fine with me. I allocate 256 megs of shared memory (32768\n> buffers), and the machine hums along very nicely. 
vmstat shows that\n> actual reads to the disk are *extremely* rare, and the writes that come\n> from inserts/etc. are nicely buffered.\n>\n> Here's how I chose 256 megs for shared buffers: First, I increased the\n> shared buffer amount until I didn't see any more performance benefits.\n> Then I doubled it just for fun. ; )\n>\n> Again, in your message it seemed like you were doing quite a bit of\n> writes - have you disabled fsync, and what sort of disk system do you\n> have?\n>\n> steve\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Thu, 14 Nov 2002 20:26:11 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n\n> hi steve,\n> \n> why fsync? - what's fsync? never heard of it... google tells\n> me something about syncing of remote hosts ... so why should I\n> activate it ?? ... I conclude, it's probably disabled because\n> I don't know what it is ....\n\nfsync() is a system call that flushes a file's contents from the\nbuffer cache to disk. PG uses it to ensure consistency in the WAL\nfiles. It is enabled by default. Do NOT disable it unless you know\nexactly what you are doing and are prepared to sacrifice some data\nintegrity for performance.\n\n-Doug\n", "msg_date": "14 Nov 2002 14:35:52 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On Thu, 14 Nov 2002, Henrik Steffen wrote:\n\n> \n> hi steve,\n> \n> why fsync? - what's fsync? never heard of it... google tells\n> me something about syncing of remote hosts ... so why should I\n> activate it ?? ... I conclude, it's probably disabled because\n> I don't know what it is ....\n> \n> it's a raid-1 ide system\n\nfsync is enabled by default. fsync flushes disk buffers after every \nwrite. Turning it off lets the OS flush buffers at its leisure. setting \nfsync=false will often double the write performance and since writes are \nrunning faster, there's more bandwidth for the reads as well, so \neverything goes faster.\n\nDefinitely look at putting your data onto a Ultra160 SCSI 15krpm RAID1 \nset. My dual 80 Gig Ultra100 IDEs can get about 30 Megs a second in a \nRAID1 for raw reads under bonnie++, while my pair of Ultra80 10krpm 18 gig \nscsis can get about 48 Megs a second raw read.\n\nPlus SCSI is usually MUCH faster for writes than IDE.\n\n", "msg_date": "Thu, 14 Nov 2002 12:54:38 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On 14 Nov 2002, Doug McNaught wrote:\n\n> \"Henrik Steffen\" <[email protected]> writes:\n> \n> > hi steve,\n> > \n> > why fsync? - what's fsync? never heard of it... google tells\n> > me something about syncing of remote hosts ... so why should I\n> > activate it ?? ... I conclude, it's probably disabled because\n> > I don't know what it is ....\n> \n> fsync() is a system call that flushes a file's contents from the\n> buffer cache to disk. PG uses it to ensure consistency in the WAL\n> files. It is enabled by default. 
Do NOT disable it unless you know\n> exactly what you are doing and are prepared to sacrifice some data\n> integrity for performance.\n\nI thought the danger with WAL was minimized to the point of not being an \nissue anymore. Tom?\n\n", "msg_date": "Thu, 14 Nov 2002 12:58:41 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> On 14 Nov 2002, Doug McNaught wrote:\n>> fsync() is a system call that flushes a file's contents from the\n>> buffer cache to disk. PG uses it to ensure consistency in the WAL\n>> files. It is enabled by default. Do NOT disable it unless you know\n>> exactly what you are doing and are prepared to sacrifice some data\n>> integrity for performance.\n\n> I thought the danger with WAL was minimized to the point of not being an \n> issue anymore. Tom?\n\nActually, more the other way 'round: WAL minimizes the cost of using\nfsync, since we now only need to fsync the WAL file and not anything\nelse. The risk of not using it is still data corruption --- mainly\nbecause without fsync, we can't be certain that WAL writes hit disk\nin advance of the corresponding data-page changes. If you have a crash,\nthe system will replay the log as far as it can; but if there are\nadditional unlogged changes in the data files, you might have\ninconsistencies.\n\nI'd definitely recommend keeping fsync on in any production\ninstallation. For development maybe you don't care about data loss...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Nov 2002 15:19:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine? " }, { "msg_contents": "> fsync() is a system call that flushes a file's contents from the\n> buffer cache to disk. PG uses it to ensure consistency in the WAL\n> files. It is enabled by default. Do NOT disable it unless you know\n> exactly what you are doing and are prepared to sacrifice some data\n> integrity for performance.\n\n The only issue of data integrity is in the case of an unclean shutdown,\nlike a power failure or a crash. PG and my OS are reliable enough that I\ntrust them not to crash, and my hardware has proven itself as well. Of\ncourse, as you point out, if someone doesn't trust their server, they're\ntaking chances.\n\n That being said, even on other machines with fsync turned off and\nunclean shutdowns (power cycles, etc.), I have yet to run into any problem\nwith PG's consistency, although I certainly cannot guarantee that would be\nthe case for anyone else!\n\nsteve\n\n", "msg_date": "Thu, 14 Nov 2002 17:33:35 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "> fsync is enabled by default. fsync flushes disk buffers after every\n> write. Turning it off lets the OS flush buffers at its leisure.\nsetting\n> fsync=false will often double the write performance and since writes are\n> running faster, there's more bandwidth for the reads as well, so\n> everything goes faster.\n\n \"doubling performance\" is very conservative, I've seen it give more than\na tenfold increase in performance on large insert/update batches. Of\ncourse, the exact figure depends on a lot of hardware and OS factors.\n\n> Definitely look at putting your data onto a Ultra160 SCSI 15krpm RAID1\n> set. 
My dual 80 Gig Ultra100 IDEs can get about 30 Megs a second in a\n> RAID1 for raw reads under bonnie++, while my pair of Ultra80 10krpm 18 gig\n> scsis can get about 48 Megs a second raw read.\n\n  If you trust the hardware, disabling fsync and using copious quantities\nof cache/buffer can almost eliminate actual disk access.  My DB machine\nwill quickly blip the lights on the RAID array once a minute or so, but\nthat's about it.  All of the actual work is happening from RAM.  Of\ncourse, with obscenely large data sets, that becomes difficult to achieve.\n\nsteve\n\n", "msg_date": "Thu, 14 Nov 2002 17:38:13 -0700", "msg_from": "\"Steve Wolfe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" } ]
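One point worth adding to the fsync discussion that closes this thread: a large part of the batch-write speedup can be had while leaving fsync on, simply by grouping many statements into one transaction, so that the WAL is flushed once per batch rather than once per INSERT. A rough sketch, with a made-up table for illustration:

    BEGIN;
    INSERT INTO hits (host, stamp, line) VALUES ('www1', now(), 1);
    INSERT INTO hits (host, stamp, line) VALUES ('www1', now(), 2);
    -- ... many more rows ...
    COMMIT;

For flat-file loads, COPY gives the same effect, since the whole file is loaded under a single transaction.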
[ { "msg_contents": "Hi,\n\n I have table with 30 columns and 30000..500000 rows. When I make \n'SELECT * FROM table' postgresql start doing something and return first \nrow after 10s (for 30k rows) and after 5min (500k rows). It looks like \nit copy whole response to temp space and after that it shows it.\n I don't know why. I tested same table structure and datas on Oracle \nand MSSQL and both returned first row immediatly.\n Have someone any idea?\n\n\t\t\t\t\t\t\tJirka Novak\n\n", "msg_date": "Mon, 11 Nov 2002 11:08:08 +0100", "msg_from": "Jirka Novak <[email protected]>", "msg_from_op": true, "msg_subject": "Slow response from 'SELECT * FROM table'" }, { "msg_contents": "hi,\n\ndo you really need all 500k records? if not i'd suggest using limit and\noffset clause (select * from table order by xy limit 100 - xy should be\nindexed...) or if you really need all records use a cursor.\n\nkuba\n\nOn Mon, 11 Nov 2002, Jirka Novak wrote:\n\n> Hi,\n>\n> I have table with 30 columns and 30000..500000 rows. When I make\n> 'SELECT * FROM table' postgresql start doing something and return first\n> row after 10s (for 30k rows) and after 5min (500k rows). It looks like\n> it copy whole response to temp space and after that it shows it.\n> I don't know why. I tested same table structure and datas on Oracle\n> and MSSQL and both returned first row immediatly.\n> Have someone any idea?\n>\n> \t\t\t\t\t\t\tJirka Novak\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Mon, 11 Nov 2002 11:13:01 +0100 (CET)", "msg_from": "Jakub Ouhrabka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow response from 'SELECT * FROM table'" }, { "msg_contents": "I am curious, what performance difference does it make to use vanilla \nSELECT with to use cursor (for retrieving the entire records)?\n\nThanks\n\nWei\n\nAt 11:13 AM 11/11/2002 +0100, Jakub Ouhrabka wrote:\n>hi,\n>\n>do you really need all 500k records? if not i'd suggest using limit and\n>offset clause (select * from table order by xy limit 100 - xy should be\n>indexed...) or if you really need all records use a cursor.\n>\n>kuba\n>\n>On Mon, 11 Nov 2002, Jirka Novak wrote:\n>\n> > Hi,\n> >\n> > I have table with 30 columns and 30000..500000 rows. When I make\n> > 'SELECT * FROM table' postgresql start doing something and return first\n> > row after 10s (for 30k rows) and after 5min (500k rows). It looks like\n> > it copy whole response to temp space and after that it shows it.\n> > I don't know why. 
I tested the same table structure and data on Oracle\n> > and MSSQL and both returned the first row immediately.\n> > Does anyone have any idea?\n> >\n> > \t\t\t\t\t\t\tJirka Novak\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to [email protected] so that your\n>message can get through to the mailing list cleanly\n\n\n", "msg_date": "Mon, 11 Nov 2002 12:19:47 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow response from 'SELECT * FROM table'" }, { "msg_contents": "Jirka,\n\n> I have a table with 30 columns and 30000..500000 rows. When I make \n> 'SELECT * FROM table' postgresql starts doing something and returns\n> the first row after 10s (for 30k rows) and after 5min (500k rows). It\n> looks like it copies the whole response to temp space and after that it\n> shows it.\n> I don't know why. I tested the same table structure and data on Oracle\n> \n> and MSSQL and both returned the first row immediately.\n> Does anyone have any idea?\n\nI can think of any number of reasons why. However, I need more detail\nfrom you:\n\n1) Why are you selecting 500,000 rows at once?\n\n2) Is \"SELECT * FROM table_a\" the entirety of your query, or was there\nmore to it than that?\n\n3) Are you talking about PSQL, or some other interface?\n\n-Josh Berkus\n\n\n", "msg_date": "Mon, 11 Nov 2002 09:28:04 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow response from 'SELECT * FROM table'" }, { "msg_contents": "On Mon, Nov 11, 2002 at 12:19:47PM -0500, Wei Weng wrote:\n> I am curious, what performance difference does it make to use a vanilla \n> SELECT versus a cursor (for retrieving the entire record set)?\n\nIf you use a cursor, you don't need to buffer the entire record set\nbefore returning it.\n\nA\n-- \n----\nAndrew Sullivan                         204-4141 Yonge Street\nLiberty RMS                           Toronto, Ontario Canada\n<[email protected]>                              M2P 2A8\n                                         +1 416 646 3304 x110\n\n", "msg_date": "Mon, 11 Nov 2002 12:59:50 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow response from 'SELECT * FROM table'" }, { "msg_contents": "Thanks all,\n\n a cursor resolved this problem. I thought that queries were rewritten \ninto an implicit cursor, so I didn't use one for the query. Now I see I was wrong.\n\n\t\t\t\t\t\tJirka Novak\n\n", "msg_date": "Tue, 12 Nov 2002 08:30:05 +0100", "msg_from": "Jirka Novak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow response from 'SELECT * FROM table'" } ]
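For reference, the cursor approach that resolved this looks roughly as follows in plain SQL. With an ordinary SELECT, libpq (and therefore Pg.pm or DBD::Pg) buffers the complete result set on the client before the first row is available, which is where the long pause came from; a cursor hands rows back in batches instead. The cursor name and batch size below are arbitrary:

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM mytable;
    FETCH 1000 FROM big_cur;    -- repeat until it returns no rows
    CLOSE big_cur;
    COMMIT;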
[ { "msg_contents": "pgsql-performers,\n\nJust out of curiosity, anybody with any ideas on what happens to this\nquery when the limit is 59626? It's as though 59626 = infinity?\n\npganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\ntstart,time_stamp limit 59624;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..160328.37 rows=59624 width=179) (actual\ntime=14.52..2225.16 rows=59624 loops=1)\n -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\nwidth=179) (actual time=14.51..2154.59 rows=59625 loops=1)\nTotal runtime: 2265.93 msec\n\nEXPLAIN\npganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\ntstart,time_stamp limit 59625;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..160331.06 rows=59625 width=179) (actual\ntime=0.45..2212.19 rows=59625 loops=1)\n -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\nwidth=179) (actual time=0.45..2140.87 rows=59626 loops=1)\nTotal runtime: 2254.50 msec\n\nEXPLAIN\npganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\ntstart,time_stamp limit 59626;\nNOTICE: QUERY PLAN:\n\nLimit (cost=160332.32..160332.32 rows=59626 width=179) (actual\ntime=37359.41..37535.85 rows=59626 loops=1)\n -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\ntime=37359.40..37471.07 rows=59627 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n(actual time=0.26..12433.00 rows=327960 loops=1)\nTotal runtime: 38477.39 msec\n\nEXPLAIN\npganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\ntstart,time_stamp limit 59627;\nNOTICE: QUERY PLAN:\n\nLimit (cost=160332.32..160332.32 rows=59627 width=179) (actual\ntime=38084.85..38260.88 rows=59627 loops=1)\n -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\ntime=38084.83..38194.63 rows=59628 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n(actual time=0.15..12174.74 rows=327960 loops=1)\nTotal runtime: 38611.83 msec\n\nEXPLAIN\n\npganalysis=> \\d ps2\n Table \"ps2\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------\n host | character varying(255) |\n pid | integer |\n line | integer |\n time_stamp | timestamp with time zone |\n seq | integer |\n cpu_sys | real |\n cpu_elapsed | real |\n cpu_user | real |\n cpu_syst | real |\n cpu_usert | real |\n mssp | integer |\n sigp | integer |\n msrt | integer |\n msst | integer |\n sigt | integer |\n msrp | integer |\n swap | integer |\n swat | integer |\n recp | integer |\n rect | integer |\n pgfp | integer |\n pgft | integer |\n icsp | integer |\n vcst | integer |\n icst | integer |\n vcsp | integer |\n fsbop | integer |\n fsbos | integer |\n fsbip | integer |\n fsbis | integer |\n dread | integer |\n dwrit | integer |\n sbhr | real |\n sread | integer |\n swrit | integer |\n lbhr | real |\n lread | integer |\n lwrit | integer |\n dbuser | character(8) |\n tstart | timestamp with time zone |\nIndexes: ps2_idx\n\npganalysis=> \\d ps2_idx\n Index \"ps2_idx\"\n Column | Type\n------------+--------------------------\n tstart | timestamp with time zone\n time_stamp | timestamp with time zone\nbtree\n\npganalysis=>\n\npsql (PostgreSQL) 7.2\ncontains support for: readline, history, multibyte\n\n\nPlatform: Celeron 1.3GHz, 512MB 40GB IDE hard disk, Linux 
2.4.8-26mdk\nkernel\n\nRegards,\n\nMike\n\n\n", "msg_date": "12 Nov 2002 04:44:44 +1100", "msg_from": "Mike Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance discontinuity" }, { "msg_contents": "\nMike,\n\n> Just out of curiosity, anybody with any ideas on what happens to this\n> query when the limit is 59626? It's as though 59626 = infinity?\n\nI'd suspect that this size has something to do with your system resources. \nhave you tried this test on other hardware?\n\nBTW, my experience is that Celerons are dog-slow at anything involveing large \nor complex queries. Something to do with the crippled cache, I think.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Nov 2002 11:14:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "On 12 Nov 2002, Mike Nielsen wrote:\n\n> Just out of curiosity, anybody with any ideas on what happens to this\n> query when the limit is 59626? It's as though 59626 = infinity?\n\n> EXPLAIN\n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59625;\n> NOTICE: QUERY PLAN:\n>\n> Limit (cost=0.00..160331.06 rows=59625 width=179) (actual\n> time=0.45..2212.19 rows=59625 loops=1)\n> -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n> width=179) (actual time=0.45..2140.87 rows=59626 loops=1)\n> Total runtime: 2254.50 msec\n>\n> EXPLAIN\n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59626;\n> NOTICE: QUERY PLAN:\n>\n> Limit (cost=160332.32..160332.32 rows=59626 width=179) (actual\n> time=37359.41..37535.85 rows=59626 loops=1)\n> -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\n> time=37359.40..37471.07 rows=59627 loops=1)\n> -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n> (actual time=0.26..12433.00 rows=327960 loops=1)\n> Total runtime: 38477.39 msec\n\nThis is apparently the breakpoint at which the sequence scan/sort/limit\nmax cost seems to become lower than indexscan/limit given the small\ndifference in estimated costs. What do you get with limit 59626 and\nenable_seqscan=off? My guess is that it's just above the 160332.32\nestimated here.\nI believe that the query is using the index to avoid a sort, but\npossibly/probably not to do the condition. 
I'd wonder if analyzing with\nmore buckets might get it a better idea, but I really don't know.\nAnother option is to see what making an index on (time_stamp, tstart)\ngives you, but if most of the table meets the time_stamp condition,\nthat wouldn't help any.\n\n", "msg_date": "Mon, 11 Nov 2002 11:37:18 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n>> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n>> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n>> tstart,time_stamp limit 59625;\n>> NOTICE: QUERY PLAN:\n>> \n>> Limit (cost=0.00..160331.06 rows=59625 width=179) (actual\n>> time=0.45..2212.19 rows=59625 loops=1)\n>> -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n>> width=179) (actual time=0.45..2140.87 rows=59626 loops=1)\n>> Total runtime: 2254.50 msec\n\n> I believe that the query is using the index to avoid a sort, but\n> possibly/probably not to do the condition.\n\nCertainly not to do the condition, because <> is not an indexable\noperator. Would it be possible to express the tstart condition as\ntstart > '2000-1-1 00:00:00' ?\n\nThe other thing that's pretty obvious is that the cost of the indexscan\nplan is drastically overestimated relative to the seqscan/sort plan.\nIt might be worth experimenting with lowering random_page_cost to see\nif that helps. I'm also curious to know whether the table is likely to\nbe nearly in order by tstart/time_stamp --- we know that the effects\nof index-order correlation aren't modeled very well in 7.2.\n\nFinally, it might be worth increasing sort_mem, if it's at the default\npresently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Nov 2002 20:39:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity " }, { "msg_contents": "Stephan, Tom & Josh:\n\nHere's the result I get from changing the <> to a > in the tstart\ncondition (no improvement):\n\npganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\npganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\npganalysis-> tstart,time_stamp limit 59628;\nNOTICE: QUERY PLAN:\n\nLimit (cost=160313.27..160313.27 rows=59628 width=179) (actual\ntime=42269.87..42709.82 rows=59628 loops=1)\n -> Sort (cost=160313.27..160313.27 rows=327895 width=179) (actual\ntime=42269.86..42643.74 rows=59629 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.40 rows=327895 width=179)\n(actual time=0.15..15211.49 rows=327960 loops=1)\nTotal runtime: 43232.53 msec\n\nEXPLAIN\n\nSetting enable_seqscan=off produced a good result:\n\n\npganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\npganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\npganalysis-> tstart,time_stamp limit 59628;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..160322.87 rows=59628 width=179) (actual\ntime=40.39..2222.06 rows=59628 loops=1)\n -> Index Scan using ps2_idx on ps2 (cost=0.00..881616.45 rows=327895\nwidth=179) (actual time=40.38..2151.38 rows=59629 loops=1)\nTotal runtime: 2262.23 msec\n\nEXPLAIN\n\nThe ps2 table is in time_stamp order, but the tstarts aren't quite as\ngood -- they're mostly there, but they're computed by subtracting a\n(stochastic) value from time_stamp.\n\nI haven't tinkered with sort_mem yet, but will once I've got this little\nproject wrapped up (1 or 2 days to go!).\n\nThis, by the way, 
is pg log data that I've parsed up so I can do some\nperformance number-crunching for a client of mine. Is there a better\nway to get comprehensive, per-query read, write and cache hit data? As\nyou can imagine, with millions of records, my client-side perl script\nfor the parsing is slow. I've tried my hand at writing an aggregate\nfunction on the server side using lex and yacc, but decided that, at\nleast for this project, I'd rather let the machine do the head-banging\n-- I can tokenize the raw syslog data (loaded into another pg table)\ninto an intermediate result in an aggregate function, and my parser\nworks on the token strings, but the perl script finished before I could\ngo any further...\n\nIn the off chance, however, that I get invited to more of this kind of\nwork, it would be really handy to be able to get the data without all\nthis hassle! Any clues would be gratefully received.\n\nRegards,\n\nMike\n\n\nOn Tue, 2002-11-12 at 04:44, Mike Nielsen wrote:\n pgsql-performers,\n \n Just out of curiosity, anybody with any ideas on what happens to this\n query when the limit is 59626? It's as though 59626 = infinity?\n \n pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n tstart,time_stamp limit 59624;\n NOTICE: QUERY PLAN:\n \n Limit (cost=0.00..160328.37 rows=59624 width=179) (actual\n time=14.52..2225.16 rows=59624 loops=1)\n -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n width=179) (actual time=14.51..2154.59 rows=59625 loops=1)\n Total runtime: 2265.93 msec\n \n EXPLAIN\n pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n tstart,time_stamp limit 59625;\n NOTICE: QUERY PLAN:\n \n Limit (cost=0.00..160331.06 rows=59625 width=179) (actual\n time=0.45..2212.19 rows=59625 loops=1)\n -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n width=179) (actual time=0.45..2140.87 rows=59626 loops=1)\n Total runtime: 2254.50 msec\n \n EXPLAIN\n pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n tstart,time_stamp limit 59626;\n NOTICE: QUERY PLAN:\n \n Limit (cost=160332.32..160332.32 rows=59626 width=179) (actual\n time=37359.41..37535.85 rows=59626 loops=1)\n -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\n time=37359.40..37471.07 rows=59627 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n (actual time=0.26..12433.00 rows=327960 loops=1)\n Total runtime: 38477.39 msec\n \n EXPLAIN\n pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n tstart,time_stamp limit 59627;\n NOTICE: QUERY PLAN:\n \n Limit (cost=160332.32..160332.32 rows=59627 width=179) (actual\n time=38084.85..38260.88 rows=59627 loops=1)\n -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\n time=38084.83..38194.63 rows=59628 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n (actual time=0.15..12174.74 rows=327960 loops=1)\n Total runtime: 38611.83 msec\n \n EXPLAIN\n \n pganalysis=> \\d ps2\n Table \"ps2\"\n Column | Type | Modifiers\n -------------+--------------------------+-----------\n host | character varying(255) |\n pid | integer |\n line | integer |\n time_stamp | timestamp with time zone |\n seq | integer |\n cpu_sys | real |\n cpu_elapsed | real |\n cpu_user | real |\n cpu_syst 
| real |\n cpu_usert | real |\n mssp | integer |\n sigp | integer |\n msrt | integer |\n msst | integer |\n sigt | integer |\n msrp | integer |\n swap | integer |\n swat | integer |\n recp | integer |\n rect | integer |\n pgfp | integer |\n pgft | integer |\n icsp | integer |\n vcst | integer |\n icst | integer |\n vcsp | integer |\n fsbop | integer |\n fsbos | integer |\n fsbip | integer |\n fsbis | integer |\n dread | integer |\n dwrit | integer |\n sbhr | real |\n sread | integer |\n swrit | integer |\n lbhr | real |\n lread | integer |\n lwrit | integer |\n dbuser | character(8) |\n tstart | timestamp with time zone |\n Indexes: ps2_idx\n \n pganalysis=> \\d ps2_idx\n Index \"ps2_idx\"\n Column | Type\n ------------+--------------------------\n tstart | timestamp with time zone\n time_stamp | timestamp with time zone\n btree\n \n pganalysis=>\n \n psql (PostgreSQL) 7.2\n contains support for: readline, history, multibyte\n \n \n Platform: Celeron 1.3GHz, 512MB 40GB IDE hard disk, Linux 2.4.8-26mdk\n kernel\n \n Regards,\n \n Mike\n \n \n \n ---------------------------(end of broadcast)---------------------------\n TIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "12 Nov 2002 15:10:31 +1100", "msg_from": "Mike Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "Given the estimated costs, PostgreSQL is doing the right things.\n\nHowever, in your case, it doesn't appear that the estimations are\nrealistic. Index scans are much cheaper than advertised.\n\nTry setting your random_page_cost lower (1.5 to 2 rather than 4).\nBumping sortmem to 32 or 64MB (if plenty of ram is available) will help\nmost situations.\n\nMight see the 'pg_autotune' project for assistance in picking good\nvalues. 
\n\nhttp://gborg.postgresql.org/project/pgautotune/projdisplay.php\n\n\nOn Mon, 2002-11-11 at 23:10, Mike Nielsen wrote:\n> Stephan, Tom & Josh:\n> \n> Here's the result I get from changing the <> to a > in the tstart\n> condition (no improvement):\n> \n> pganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\n> pganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> pganalysis-> tstart,time_stamp limit 59628;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=160313.27..160313.27 rows=59628 width=179) (actual\n> time=42269.87..42709.82 rows=59628 loops=1)\n> -> Sort (cost=160313.27..160313.27 rows=327895 width=179) (actual\n> time=42269.86..42643.74 rows=59629 loops=1)\n> -> Seq Scan on ps2 (cost=0.00..13783.40 rows=327895 width=179)\n> (actual time=0.15..15211.49 rows=327960 loops=1)\n> Total runtime: 43232.53 msec\n> \n> EXPLAIN\n> \n> Setting enable_seqscan=off produced a good result:\n> \n> \n> pganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\n> pganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> pganalysis-> tstart,time_stamp limit 59628;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..160322.87 rows=59628 width=179) (actual\n> time=40.39..2222.06 rows=59628 loops=1)\n> -> Index Scan using ps2_idx on ps2 (cost=0.00..881616.45 rows=327895\n> width=179) (actual time=40.38..2151.38 rows=59629 loops=1)\n> Total runtime: 2262.23 msec\n> \n> EXPLAIN\n> \n> The ps2 table is in time_stamp order, but the tstarts aren't quite as\n> good -- they're mostly there, but they're computed by subtracting a\n> (stochastic) value from time_stamp.\n> \n> I haven't tinkered with sort_mem yet, but will once I've got this little\n> project wrapped up (1 or 2 days to go!).\n> \n> This, by the way, is pg log data that I've parsed up so I can do some\n> performance number-crunching for a client of mine. Is there a better\n> way to get comprehensive, per-query read, write and cache hit data? As\n> you can imagine, with millions of records, my client-side perl script\n> for the parsing is slow. I've tried my hand at writing an aggregate\n> function on the server side using lex and yacc, but decided that, at\n> least for this project, I'd rather let the machine do the head-banging\n> -- I can tokenize the raw syslog data (loaded into another pg table)\n> into an intermediate result in an aggregate function, and my parser\n> works on the token strings, but the perl script finished before I could\n> go any further...\n> \n> In the off chance, however, that I get invited to more of this kind of\n> work, it would be really handy to be able to get the data without all\n> this hassle! Any clues would be gratefully received.\n> \n> Regards,\n> \n> Mike\n> \n> \n> On Tue, 2002-11-12 at 04:44, Mike Nielsen wrote:\n> pgsql-performers,\n> \n> Just out of curiosity, anybody with any ideas on what happens to this\n> query when the limit is 59626? 
It's as though 59626 = infinity?\n> \n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59624;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..160328.37 rows=59624 width=179) (actual\n> time=14.52..2225.16 rows=59624 loops=1)\n> -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n> width=179) (actual time=14.51..2154.59 rows=59625 loops=1)\n> Total runtime: 2265.93 msec\n> \n> EXPLAIN\n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59625;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..160331.06 rows=59625 width=179) (actual\n> time=0.45..2212.19 rows=59625 loops=1)\n> -> Index Scan using ps2_idx on ps2 (cost=0.00..881812.85 rows=327935\n> width=179) (actual time=0.45..2140.87 rows=59626 loops=1)\n> Total runtime: 2254.50 msec\n> \n> EXPLAIN\n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59626;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=160332.32..160332.32 rows=59626 width=179) (actual\n> time=37359.41..37535.85 rows=59626 loops=1)\n> -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\n> time=37359.40..37471.07 rows=59627 loops=1)\n> -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n> (actual time=0.26..12433.00 rows=327960 loops=1)\n> Total runtime: 38477.39 msec\n> \n> EXPLAIN\n> pganalysis=> explain analyze select * from ps2 where tstart<> '2000-1-1\n> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> tstart,time_stamp limit 59627;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=160332.32..160332.32 rows=59627 width=179) (actual\n> time=38084.85..38260.88 rows=59627 loops=1)\n> -> Sort (cost=160332.32..160332.32 rows=327935 width=179) (actual\n> time=38084.83..38194.63 rows=59628 loops=1)\n> -> Seq Scan on ps2 (cost=0.00..13783.52 rows=327935 width=179)\n> (actual time=0.15..12174.74 rows=327960 loops=1)\n> Total runtime: 38611.83 msec\n> \n> EXPLAIN\n> \n> pganalysis=> \\d ps2\n> Table \"ps2\"\n> Column | Type | Modifiers\n> -------------+--------------------------+-----------\n> host | character varying(255) |\n> pid | integer |\n> line | integer |\n> time_stamp | timestamp with time zone |\n> seq | integer |\n> cpu_sys | real |\n> cpu_elapsed | real |\n> cpu_user | real |\n> cpu_syst | real |\n> cpu_usert | real |\n> mssp | integer |\n> sigp | integer |\n> msrt | integer |\n> msst | integer |\n> sigt | integer |\n> msrp | integer |\n> swap | integer |\n> swat | integer |\n> recp | integer |\n> rect | integer |\n> pgfp | integer |\n> pgft | integer |\n> icsp | integer |\n> vcst | integer |\n> icst | integer |\n> vcsp | integer |\n> fsbop | integer |\n> fsbos | integer |\n> fsbip | integer |\n> fsbis | integer |\n> dread | integer |\n> dwrit | integer |\n> sbhr | real |\n> sread | integer |\n> swrit | integer |\n> lbhr | real |\n> lread | integer |\n> lwrit | integer |\n> dbuser | character(8) |\n> tstart | timestamp with time zone |\n> Indexes: ps2_idx\n> \n> pganalysis=> \\d ps2_idx\n> Index \"ps2_idx\"\n> Column | Type\n> ------------+--------------------------\n> tstart | timestamp with time zone\n> time_stamp | timestamp with time zone\n> btree\n> \n> pganalysis=>\n> \n> psql (PostgreSQL) 7.2\n> contains support for: readline, history, multibyte\n> \n> \n> Platform: Celeron 1.3GHz, 512MB 40GB IDE hard 
disk, Linux 2.4.8-26mdk\n> kernel\n> \n> Regards,\n> \n> Mike\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n-- \n Rod Taylor\n\n", "msg_date": "11 Nov 2002 23:51:19 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "Mike,\n\n> Given the estimated costs, PostgreSQL is doing the right things.\n> \n> However, in your case, it doesn't appear that the estimations are\n> realistic. Index scans are much cheaper than advertised.\n\nCan I assume that you've run VACUUM FULL ANALYZE on the table, or\npreferably the whole database?\n\n> \n> Try setting your random_page_cost lower (1.5 to 2 rather than 4).\n> Bumping sortmem to 32 or 64MB (if plenty of ram is available) will\n> help\n> most situations.\n> \n> Might see the 'pg_autotune' project for assistance in picking good\n> values. \n> \n> http://gborg.postgresql.org/project/pgautotune/projdisplay.php\n\nUm. I don't think we have anything to advertise yet, for pg_autotune.\n It's still very much an alpha, and the limits we set are pretty\narbitrary.\n\n-Josh Berkus\n", "msg_date": "Tue, 12 Nov 2002 08:57:43 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "Hi, Josh.\n\nYes, I'd run a VACUUM FULL ANALYZE -- I did it again just to make sure,\nand re-ran the query (similar result):\n\npganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\npganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\npganalysis-> tstart,time_stamp limit 59628;\nNOTICE: QUERY PLAN:\n\nLimit (cost=160313.27..160313.27 rows=59628 width=179) (actual\ntime=45405.47..46320.12 rows=59628 loops=1)\n -> Sort (cost=160313.27..160313.27 rows=327895 width=179) (actual\ntime=45405.46..46248.31 rows=59629 loops=1)\n -> Seq Scan on ps2 (cost=0.00..13783.40 rows=327895 width=179)\n(actual time=13.52..17111.66 rows=327960 loops=1)\nTotal runtime: 46894.21 msec\n\nEXPLAIN\n\nUnfortunately, I have not yet had time to experiment with twiddling the\nquery optimizer parameters or memory -- my apologies for this, but,\nwell, a guy's gotta eat...\n\nRegards,\n\nMike\n\nOn Wed, 2002-11-13 at 03:57, Josh Berkus wrote:\n> Mike,\n> \n> > Given the estimated costs, PostgreSQL is doing the right things.\n> > \n> > However, in your case, it doesn't appear that the estimations are\n> > realistic. Index scans are much cheaper than advertised.\n> \n> Can I assume that you've run VACUUM FULL ANALYZE on the table, or\n> preferably the whole database?\n> \n> > \n> > Try setting your random_page_cost lower (1.5 to 2 rather than 4).\n> > Bumping sortmem to 32 or 64MB (if plenty of ram is available) will\n> > help\n> > most situations.\n> > \n> > Might see the 'pg_autotune' project for assistance in picking good\n> > values. \n> > \n> > http://gborg.postgresql.org/project/pgautotune/projdisplay.php\n> \n> Um. 
I don't think we have anything to advertise yet, for pg_autotune.\n> It's still very much an alpha, and the limits we set are pretty\n> arbitrary.\n> \n> -Josh Berkus\n-- \nMike Nielsen <[email protected]>\n\n", "msg_date": "13 Nov 2002 11:04:01 +1100", "msg_from": "Mike Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "\nMike,\n\n> Yes, I'd run a VACUUM FULL ANALYZE -- I did it again just to make sure,\n> and re-ran the query (similar result):\n> \n> pganalysis=> explain analyze select * from ps2 where tstart> '2000-1-1\n> pganalysis'> 00:00:00' and time_stamp > '2000-1-1 00:00:00' order by\n> pganalysis-> tstart,time_stamp limit 59628;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=160313.27..160313.27 rows=59628 width=179) (actual\n> time=45405.47..46320.12 rows=59628 loops=1)\n> -> Sort (cost=160313.27..160313.27 rows=327895 width=179) (actual\n> time=45405.46..46248.31 rows=59629 loops=1)\n> -> Seq Scan on ps2 (cost=0.00..13783.40 rows=327895 width=179)\n> (actual time=13.52..17111.66 rows=327960 loops=1)\n> Total runtime: 46894.21 msec\n> \n> EXPLAIN\n> \n> Unfortunately, I have not yet had time to experiment with twiddling the\n> query optimizer parameters or memory -- my apologies for this, but,\n> well, a guy's gotta eat...\n\nWell, I''ll just concur with what others have said: for some reason, the \nparser is slightly underestimateing the cost of a seq scan, and dramatically \noverestimating the cost of an index scan, for this query. \n\nOther than tweaking to parser calculation values, I'd suggest dropping and \nre-building the index just for thouroughness.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 13 Nov 2002 13:45:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "On Fri, 15 Nov 2002 03:26:32 +0000 (UTC), in\ncomp.databases.postgresql.performance you wrote:\n\n> -> Seq Scan on ps2 (cost=0.00..13783.40 rows=327895 width=179)\n ^^^^^\n>(actual time=0.15..15211.49 rows=327960 loops=1)\n>\n> -> Index Scan using ps2_idx on ps2 (cost=0.00..881616.45 rows=327895\n ^^^^^^\n>width=179) (actual time=40.38..2151.38 rows=59629 loops=1)\n ^^^^\n>\n>The ps2 table is in time_stamp order, but the tstarts aren't quite as\n>good -- they're mostly there, but they're computed by subtracting a\n>(stochastic) value from time_stamp.\n\nMike,\n\nthis is the well known \"divide correlation by number of index columns\"\neffect. This effect can be masked to a certain degree by reducing\nrandom_page_cost, as has already been suggested.\n\nThe estimated index scan cost is also influenced by\neffective_cache_size; its default value is 1000. Try\n\n\tSET effective_cache_size = 50000;\n\nThis should help a bit, but please don't expect a big effect.\n\nI'm running Postgres 7.2 with a modified index cost estimator here.\nThe patch is at http://www.pivot.at/pg/16-correlation.diff\n\nThis patch gives you two new GUC variables.\n\nindex_cost_algorithm: allows you to select between different methods\nof interpolating between best case and worst case. 0 is the standard\nbehavior (before the patch), 1 to 4 tend more and more towards lower\nindex scan costs. See the switch statement in costsize.c for details.\nDefault = 3.\n\nsecondary_correlation: is a factor that is used to reduce the\ncorrelation of the first index column a little bit once for each\nadditional index column. 
Default = 0.95.\n\nWith default settings you should get an index cost estimate between\n20000 and 30000. Which allows you to increase random_page_cost to a\nmore reasonable value of something like 10 or even higher.\n\nIf you try it, please let me know how it works for you.\n\nServus\n Manfred\n", "msg_date": "Fri, 29 Nov 2002 19:24:44 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance discontinuity" }, { "msg_contents": "L.S.\n\nThe query below runs 10-20 times slower under v7.3 than it did under v7.2.3:\n\n- hardware is the same\n- standard install of postgresql, both version had stats-collection enabled\n- v7.2.3 had no multibyte and no locale, obviously v7.3 does\n- *very* recent vacuum analyse\n\n\nI expected some overhead due to the enabled mulitbyte, but not this much.. ;(\n\nBTW, there are a few other queries that are performing *real* slow, but I'm \nhoping this one will give away a cause for the overall problem...\n\n\nCould anybody offer an idea?\n\n\n\ntrial=# explain analyse select foo.*, c.id from\n\t(select *, 't' from lijst01_table union all \n\tselect *, 't' from lijst02_table union all \n\tselect *, 'f' from lijst03_table union all \n\tselect *, 'f' from lijst04_table union all \n\tselect *, 't' from lijst04b_table ) as foo\n\tinner join creditor c \n\t\ton foo.dflt_creditor_id = c.old_creditor_id\n\t order by old_id;\n\n* foo.dflt_creditor_id is of type varchar(20)\n* c.old_creditor_id is of type text\n\n\nThe plan below shows something weird is happening during the join, but I can't \nexplain it.\n\n\nTIA,\n\n\n\n\n\nFrank.\n\n\n\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=54103.74..54116.18 rows=4976 width=498) (actual \ntime=234595.27..234607.58 rows=4976 loops=1)\n Sort Key: foo.old_id\n -> Nested Loop (cost=0.00..53798.19 rows=4976 width=498) (actual \ntime=7559.20..234476.70 rows=4976 loops=1)\n Join Filter: (\"inner\".dflt_creditor_id = \n(\"outer\".old_creditor_id)::text)\n -> Seq Scan on creditor c (cost=0.00..8.27 rows=227 width=14) \n(actual time=0.05..7.35 rows=227 loops=1)\n -> Subquery Scan foo (cost=0.00..174.76 rows=4976 width=150) \n(actual time=0.25..969.47 rows=4976 loops=227)\n -> Append (cost=0.00..174.76 rows=4976 width=150) (actual \ntime=0.20..658.14 rows=4976 loops=227)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..2.46 rows=46 \nwidth=145) (actual time=0.19..6.26 rows=46 loops=227)\n -> Seq Scan on lijst01_table (cost=0.00..2.46 \nrows=46 width=145) (actual time=0.10..3.40 rows=46 loops=227)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..30.62 \nrows=862 width=150) (actual time=0.16..111.38 rows=862 loops=227)\n -> Seq Scan on lijst02_table (cost=0.00..30.62 \nrows=862 width=150) (actual time=0.09..59.79 rows=862 loops=227)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..48.63 \nrows=1363 width=148) (actual time=0.16..166.98 rows=1363 loops=227)\n -> Seq Scan on lijst03_table (cost=0.00..48.63 \nrows=1363 width=148) (actual time=0.09..87.45 rows=1363 loops=227)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..92.03 \nrows=2703 width=134) (actual time=0.15..338.66 rows=2703 loops=227)\n -> Seq Scan on lijst04_table (cost=0.00..92.03 \nrows=2703 width=134) (actual time=0.09..176.41 rows=2703 loops=227)\n -> Subquery Scan \"*SELECT* 5\" (cost=0.00..1.02 rows=2 \nwidth=134) (actual time=0.16..0.28 rows=2 loops=227)\n -> Seq Scan on lijst04b_table 
(cost=0.00..1.02 \nrows=2 width=134) (actual time=0.09..0.16 rows=2 loops=227)\n Total runtime: 234624.07 msec\n(18 rows)\n\n\n", "msg_date": "Mon, 2 Dec 2002 16:45:55 +0100", "msg_from": "\"ir. F.T.M. van Vugt bc.\" <[email protected]>", "msg_from_op": false, "msg_subject": "v7.2.3 versus v7.3 -> huge performance penalty for JOIN with UNION" }, { "msg_contents": "On Mon, Dec 02, 2002 at 04:45:55PM +0100, ir. F.T.M. van Vugt bc. wrote:\n> L.S.\n> \n> The query below runs 10-20 times slower under v7.3 than it did under v7.2.3:\n\n> - v7.2.3 had no multibyte and no locale, obviously v7.3 does\n\nAre you using the C locale? If it was not enabled in 7.2.3, I\nbelieve it was using C anyway; if you have some other locale, it's\nnow getting picked up, and that might be the source of the slower\nperformance (?). \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 2 Dec 2002 11:00:12 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n UNION" }, { "msg_contents": "\"ir. F.T.M. van Vugt bc.\" <[email protected]> writes:\n> The query below runs 10-20 times slower under v7.3 than it did under v7.2.3:\n\nI don't suppose you have explain output for it from 7.2.3?\n\nIt seems strange to me that the thing is picking a nestloop join here.\nEither merge or hash would make more sense ... oh, but wait:\n\n> \tinner join creditor c \n> \t\ton foo.dflt_creditor_id = c.old_creditor_id\n\n> * foo.dflt_creditor_id is of type varchar(20)\n> * c.old_creditor_id is of type text\n\nIIRC, merge and hash only work on plain Vars --- the implicit type\ncoercion from varchar to text is what's putting the kibosh on a more\nintelligent join plan. Can you fix your table declarations to agree\non the datatype? If you don't want to change the tables, another\npossibility is something like\n\n select foo.*, c.id from\n\t(select *, dflt_creditor_id::text as key, 't' from lijst01_table union all \n\tselect *, dflt_creditor_id::text as key, 't' from lijst02_table union all \n\tselect *, dflt_creditor_id::text as key, 'f' from lijst03_table union all \n\tselect *, dflt_creditor_id::text as key, 'f' from lijst04_table union all \n\tselect *, dflt_creditor_id::text as key, 't' from lijst04b_table ) as foo\n\tinner join creditor c \n\t\ton foo.key = c.old_creditor_id\n\t order by old_id;\n\nie, force the type coercion to occur down inside the union, not at the\njoin.\n\nThis doesn't explain the slowdown from 7.2.3, though --- it had the same\ndeficiency. (I am hoping to get around to fixing it for 7.4.)\n\nIt could easy be that --enable-locale explains the slowdown. Are you\nrunning 7.4 in C locale, or something else? Comparisons in locales\nlike en_US can be *way* slower than in C locale. 
You can use\npg_controldata to check this for sure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Dec 2002 11:13:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n\tUNION" }, { "msg_contents": "Wow, the speed at which you guys are responding never ceases to amaze me !\n\nTL> I don't suppose you have explain output for it from 7.2.3?\n\nNope, sorry 'bout that.\nBTW, the performance comparison was not a 'hard' (measured) number, but a \nwatch-timed conclusion on a complete run of a conversiontool this query is \npart of.\n\nTL> It seems strange to me that the thing is picking a nestloop join here.\nTL> oh, but wait: the implicit type coercion from varchar to text is what's\nTL> putting the kibosh on a more intelligent join plan.\n\nYou're abolutely right, I'm back in business when putting a type coercion \ninside the union:\n\ntrial=# explain select foo.*, c.id from\n<cut>\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------\n Sort (cost=588.66..601.10 rows=4976 width=530)\n Sort Key: foo.old_id\n -> Hash Join (cost=8.84..283.12 rows=4976 width=530)\n Hash Cond: (\"outer\".\"key\" = \"inner\".old_creditor_id)\n -> Subquery Scan foo (cost=0.00..174.76 rows=4976 width=150)\n -> Append (cost=0.00..174.76 rows=4976 width=150)\n <cut>\n\n(as opposed to: (cost=54103.74..54116.18 rows=4976 width=498))\n\n\n\n> This doesn't explain the slowdown from 7.2.3, though --- it had the same\n> deficiency. (I am hoping to get around to fixing it for 7.4.)\n\nMmm, that's weird. Could be caused by somebody over here who has done 'work' \non some queries... ;( => I'll check on that, if I can be absolutely sure the \n7.2.3 version planned *this* query differently, I'll let you know. Sorry \n'bout that....\n\n\nAS> It could easy be that --enable-locale explains the slowdown. Are you\nAS> running 7.4 in C locale, or something else?\n\nOn v7.2.3. I wasn't doing anything with locale.\nThe v7.3 put 'POSIX' into the postgresql.conf file, changing that into 'C' \ndidn't seem to make any difference.\n\nAS> Comparisons in locales like en_US can be *way* slower than in C locale.\nAS> You can use pg_controldata to check this for sure.\n\nO.K. this seems to help a lot as well !\n\nI'll have to take a look at both ISO C and POSIX locale, 'cause I wouldn't \nhave expected it to make such a difference...\n\nOn the original v7.3, pg_controldata returned 'posix', upon changing the \npostgresql.conf it confirmed the change to 'C'. This resulted in:\n\n\nPOSIX_trial=# explain analyse select foo.*, c.id from \n<cut>\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=588.66..601.10 rows=4976 width=530) (actual time=2482.51..2530.54 \nrows=4976 loops=1)\n<cut>\n Total runtime: 2636.15 msec\n\n\nC_trial=# explain analyse select foo.*, c.id from \n<cut>\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=588.66..601.10 rows=4976 width=530) (actual time=1537.05..1549.34 \nrows=4976 loops=1)\n<cut>\n Total runtime: 1567.76 msec\n\n\n\n\nHey, I'm happy ;-)\n\n\n\nThanks a lot !!!\n\n\n\n\n\n\nFrank.\n", "msg_date": "Mon, 2 Dec 2002 18:20:06 +0100", "msg_from": "\"ir. F.T.M. 
van Vugt bc.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n UNION" } ]
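A footnote to the join-coercion fix above: Tom's first suggestion -- making the table declarations agree on the datatype -- can also be done directly. Under 7.3 there is no ALTER COLUMN TYPE yet, but DROP COLUMN is available, so a minimal sketch (table and column names taken from the thread; the _new suffix is only for illustration) would be:

    BEGIN;
    ALTER TABLE lijst01_table ADD COLUMN dflt_creditor_id_new text;
    UPDATE lijst01_table SET dflt_creditor_id_new = dflt_creditor_id;
    ALTER TABLE lijst01_table DROP COLUMN dflt_creditor_id;
    ALTER TABLE lijst01_table RENAME COLUMN dflt_creditor_id_new TO dflt_creditor_id;
    COMMIT;
    -- repeat for lijst02_table .. lijst04b_table, then VACUUM ANALYZE the tables

(Note this moves the column to the end of each table, which only matters if something relies on SELECT * column order; as long as all five lijst tables get the same treatment, the UNION ALL branches still line up.) With dflt_creditor_id stored as text on both sides, the join condition compares plain Vars, no implicit coercion is inserted, and the planner is free to choose a hash or merge join instead of the nested loop.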
[ { "msg_contents": "Heinrik,\n\n\"So, where do i find and change shmmax shmall settings ??\nWhat should I put there?\n\nWhat is a recommended value for shared buffers in postgresql.conf ?\"\n\nThere is no \"recommended value.\" You have to calculate this relatively:\n\n1) Figure out how much RAM your server has available for PostgreSQL. For \nexample, I have one server on which I allocate 256 mb for Apache, 128 mb for \nlinux, and thus have 512mb available for Postgres.\n\n2) Calculate out the memory settings to use 70% of that amount of Ram in \nregular usage. Please beware that sort_mem is *not* shared, meaning that it \nwill be multiplied by the number of concurrent requests requiring sorting. \nThus, your calculation (in K) should be:\n\n250K +\n8.2K * shared_buffers +\n14.2K * max_connections +\nsort_mem * average number of requests per minute\n=====================================\nmemory available to postgresql in K * 0.7\n\nYou will also have to set SHMMAX and SHMMALL to accept this memory allocation. \nSince shmmax is set in bytes, then I generally feel safe making it:\n1024 * 0.5 * memory available to postgresql in K\n\nSetting them is done simply:\n$ echo 134217728 >/proc/sys/kernel/shmall\n$ echo 134217728 >/proc/sys/kernel/shmmax\n\nThis is all taken from the postgresql documentation, with some experience:\nhttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 12 Nov 2002 12:05:44 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On Tue, 12 Nov 2002, Josh Berkus wrote:\n\n> Heinrik,\n> \n> \"So, where do i find and change shmmax shmall settings ??\n> What should I put there?\n> \n> What is a recommended value for shared buffers in postgresql.conf ?\"\n> \n> There is no \"recommended value.\" You have to calculate this relatively:\n> \n> 1) Figure out how much RAM your server has available for PostgreSQL. For \n> example, I have one server on which I allocate 256 mb for Apache, 128 mb for \n> linux, and thus have 512mb available for Postgres.\n> \n> 2) Calculate out the memory settings to use 70% of that amount of Ram in \n> regular usage. Please beware that sort_mem is *not* shared, meaning that it \n> will be multiplied by the number of concurrent requests requiring sorting. \n> Thus, your calculation (in K) should be:\n> \n> 250K +\n> 8.2K * shared_buffers +\n> 14.2K * max_connections +\n> sort_mem * average number of requests per minute\n> =====================================\n> memory available to postgresql in K * 0.7\n> \n> You will also have to set SHMMAX and SHMMALL to accept this memory allocation. \n> Since shmmax is set in bytes, then I generally feel safe making it:\n> 1024 * 0.5 * memory available to postgresql in K\n> \n> Setting them is done simply:\n> $ echo 134217728 >/proc/sys/kernel/shmall\n> $ echo 134217728 >/proc/sys/kernel/shmmax\n> \n> This is all taken from the postgresql documentation, with some experience:\n> http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html\n\nNote that on RedHat boxes, you can also use the /etc/sysctl.conf file to \ndo this. It is considered the preferred method, and a little less obtuse \nfor beginners.\n\nAs root, run 'sysctl -a' to get a list of all possible system kernel \nsettings. 'sysctl -a | grep shm' will show you all the shared memory \nsettings as they are now. 
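For example, entries allowing roughly half a gigabyte of shared memory might look like this (the figures are only an illustration -- size them for your own box, and note that kernel.shmall is counted in pages on Linux, so reusing the byte figure there is simply generous):

    # /etc/sysctl.conf
    kernel.shmmax = 536870912
    kernel.shmall = 536870912
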
Edit the /etc/sysctl.conf file with the new \nsettings and use 'sysctl -p' to process the new settings. This way you \ndon't have to edit the /etc/rc.d/rc.local file to get the settings you \nwant.\n\nOn the subject of sort_mem, I've found that if your result sets are all \nlarge (say 100+megs each) that as long as your sort mem isn't big enough \nto hold the whole result set, the performance difference is negligable. \nI.e. going from 4 meg to 16 meg of sort_mem for a 100 Meg result set \ndoesn't seem to help much at all. In fact, in some circumstances, it \nseems that the smaller number is faster, especially under heavy parallel \nload, since larger settings may result in undesired swapping out of other \nprocesses to allocate memory for sorts. \n\nIn other words, it's faster to sort 20 results in 4 megs each if you \naren't causing swapping out, than it is to sort 20 results in 32 megs \neach if that does cause things to swap out.\n\n", "msg_date": "Tue, 12 Nov 2002 13:26:34 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "Hello Josh!\n\nThis is was I figured out now:\n\n1) RAM available: 1024 MB, there's nothing else but postgres on this\n machine, so if I calculate 128 MB for Linux, there are 896 MB left\n for Postgres.\n\n2) 70 % of 896 MB is 627 MB\n\nNow, if I follow your instructions:\n\n250K +\n8.2K * 128 (shared_buffers) = 1049,6K +\n14.2K * 64 (max_connections) = 908,8K +\n1024K * 5000 (average number of requests per minute) = 5120000K\n===============================================================\n5122208.4K ==> 5002.16 MB\n\nthis is a little bit more than I have available, isn't it? :(((\n\nsure that this has got to be the \"average number of requests per minute\"\nand not \"per second\" ? seems so much, doesn't it?\n\nwhat am I supposed to do now?\n\nthanks again,\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, November 12, 2002 9:05 PM\nSubject: Re: Upgrade to dual processor machine?\n\n\nHeinrik,\n\n\"So, where do i find and change shmmax shmall settings ??\nWhat should I put there?\n\nWhat is a recommended value for shared buffers in postgresql.conf ?\"\n\nThere is no \"recommended value.\" You have to calculate this relatively:\n\n1) Figure out how much RAM your server has available for PostgreSQL. For\nexample, I have one server on which I allocate 256 mb for Apache, 128 mb for\nlinux, and thus have 512mb available for Postgres.\n\n2) Calculate out the memory settings to use 70% of that amount of Ram in\nregular usage. 
Please beware that sort_mem is *not* shared, meaning that it\nwill be multiplied by the number of concurrent requests requiring sorting.\nThus, your calculation (in K) should be:\n\n250K +\n8.2K * shared_buffers +\n14.2K * max_connections +\nsort_mem * average number of requests per minute\n=====================================\nmemory available to postgresql in K * 0.7\n\nYou will also have to set SHMMAX and SHMMALL to accept this memory allocation.\nSince shmmax is set in bytes, then I generally feel safe making it:\n1024 * 0.5 * memory available to postgresql in K\n\nSetting them is done simply:\n$ echo 134217728 >/proc/sys/kernel/shmall\n$ echo 134217728 >/proc/sys/kernel/shmmax\n\nThis is all taken from the postgresql documentation, with some experience:\nhttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html\n\n--\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n", "msg_date": "Wed, 13 Nov 2002 08:29:25 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On 13 Nov 2002 at 8:29, Henrik Steffen wrote:\n\n> Hello Josh!\n> \n> This is was I figured out now:\n> \n> 1) RAM available: 1024 MB, there's nothing else but postgres on this\n> machine, so if I calculate 128 MB for Linux, there are 896 MB left\n> for Postgres.\n> \n> 2) 70 % of 896 MB is 627 MB\n> \n> Now, if I follow your instructions:\n> \n> 250K +\n> 8.2K * 128 (shared_buffers) = 1049,6K +\n> 14.2K * 64 (max_connections) = 908,8K +\n> 1024K * 5000 (average number of requests per minute) = 5120000K\n> ===============================================================\n> 5122208.4K ==> 5002.16 MB\n> \n> this is a little bit more than I have available, isn't it? :(((\n\nObviously tuning depends upon application and you have to set the threshold by \ntrial and error.\n\nI would suggest following from some recent discussions on such topics.\n\n1)Set shared buffers somewhere between 500-600MB. Tha'ts going to be optimal \nrange for a Gig of RAM.\n\n2) How big you database is? How much of it you need it in memory at any given \ntime? You need to get these figures while setting shared buffers. But still 500-\n600MB seems good because it does not include file system cache and buffers.\n\n3) Sort mem is a tricky affair. AFAIU, it is used only when you create index or \nsort results of a query. If do these things seldomly, you can set this very low \nor default. For individual session that creates index, you can set the sort \nmemory accordingly. Certainly in your case, number of requests per minute are \nhigh but if you are not creating any index/sorting in each query, you can leave \nthe default as it is..\n\nHTH\n\nBye\n Shridhar\n\n--\nAnother dream that failed. There's nothing sadder.\t\t-- Kirk, \"This side of \nParadise\", stardate 3417.3\n\n", "msg_date": "Wed, 13 Nov 2002 13:23:36 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "\nHello Shridhar,\n\nthanks for your answer...\n\n1) in the docs it says: shared_buffers should be 2*max_connections, min 16.\nnow, you suggest to put it to 500-600 MB, which means I will have to\nincrease shared_buffers to 68683 -- is this really correct? I mean,\nRAM is allready now almost totally consumed.\n\n2) the database has a size of 3.6 GB at the moment... about 100 user tables.\n\n3) ok, I understand: I am not creating any indexes usually. 
Only once at night\nall user indexes are dropped and recreated, I could imagine to increase the\nsort_mem for this script... so sort_mem with 1024K is ok, or should it be\nlowered to, say, 512K ?\n\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, November 13, 2002 8:53 AM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> On 13 Nov 2002 at 8:29, Henrik Steffen wrote:\n>\n> > Hello Josh!\n> >\n> > This is was I figured out now:\n> >\n> > 1) RAM available: 1024 MB, there's nothing else but postgres on this\n> > machine, so if I calculate 128 MB for Linux, there are 896 MB left\n> > for Postgres.\n> >\n> > 2) 70 % of 896 MB is 627 MB\n> >\n> > Now, if I follow your instructions:\n> >\n> > 250K +\n> > 8.2K * 128 (shared_buffers) = 1049,6K +\n> > 14.2K * 64 (max_connections) = 908,8K +\n> > 1024K * 5000 (average number of requests per minute) = 5120000K\n> > ===============================================================\n> > 5122208.4K ==> 5002.16 MB\n> >\n> > this is a little bit more than I have available, isn't it? :(((\n>\n> Obviously tuning depends upon application and you have to set the threshold by\n> trial and error.\n>\n> I would suggest following from some recent discussions on such topics.\n>\n> 1)Set shared buffers somewhere between 500-600MB. Tha'ts going to be optimal\n> range for a Gig of RAM.\n>\n> 2) How big you database is? How much of it you need it in memory at any given\n> time? You need to get these figures while setting shared buffers. But still 500-\n> 600MB seems good because it does not include file system cache and buffers.\n>\n> 3) Sort mem is a tricky affair. AFAIU, it is used only when you create index or\n> sort results of a query. If do these things seldomly, you can set this very low\n> or default. For individual session that creates index, you can set the sort\n> memory accordingly. Certainly in your case, number of requests per minute are\n> high but if you are not creating any index/sorting in each query, you can leave\n> the default as it is..\n>\n> HTH\n>\n> Bye\n> Shridhar\n>\n> --\n> Another dream that failed. There's nothing sadder. -- Kirk, \"This side of\n> Paradise\", stardate 3417.3\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Wed, 13 Nov 2002 09:14:03 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? 
" }, { "msg_contents": "On 13 Nov 2002 at 9:14, Henrik Steffen wrote:\n> 1) in the docs it says: shared_buffers should be 2*max_connections, min 16.\n> now, you suggest to put it to 500-600 MB, which means I will have to\n> increase shared_buffers to 68683 -- is this really correct? I mean,\n> RAM is allready now almost totally consumed.\n\nYes. 2*max connection is minimum. Anything additional is always welcome as long \nas it does not starve the system.\n\nIf you have a gig of memory and shared buffers are 536MB as you have indicated, \nwho is taking rest of the RAM? \n\nWhat are your current settings? Could you please repost. I lost earlier \nthread(Sorry for that.. Had a HDD meltdown here couple of days back. Lost few \nmails..)\n \n> 2) the database has a size of 3.6 GB at the moment... about 100 user tables.\n\n500-600MB would take you comfortably in this case..\n\n> 3) ok, I understand: I am not creating any indexes usually. Only once at night\n> all user indexes are dropped and recreated, I could imagine to increase the\n> sort_mem for this script... so sort_mem with 1024K is ok, or should it be\n> lowered to, say, 512K ?\n\nThat actually depends upons size of table you are indexing and time you can \nallow for indexing. Default is 4 MB. I would something like 32MB should help a \nlot..\n\nHTH\n\nBye\n Shridhar\n\n--\nQOTD:\t\"It seems to me that your antenna doesn't bring in too many\tstations \nanymore.\"\n\n", "msg_date": "Wed, 13 Nov 2002 13:56:23 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "\ndear shridhar,\n \n> Yes. 2*max connection is minimum. Anything additional is always welcome as long \n> as it does not starve the system.\n\nok, I tried to set shared_buffers to 65535 now. but then restarting postgres\nfails - it says: \n\nIpcMemoryCreate: shmget(key=5432001, size=545333248, 03600) failed: Invalid argument\n\nand a message telling me to either lower the shared_buffers or raise the\nSHMMAX. \n\n> If you have a gig of memory and shared buffers are 536MB as you have indicated, \n> who is taking rest of the RAM? \n\nwell, I guess it's postgres... 
see the output of top below:\n\n 11:06am up 1 day, 16:46, 1 user, load average: 1,32, 1,12, 1,22\n53 processes: 52 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: 24,5% user, 11,2% system, 0,0% nice, 5,6% idle\nMem: 1020808K av, 1006156K used, 14652K free, 8520K shrd, 37204K buff\nSwap: 1028112K av, 60K used, 1028052K free 849776K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n10678 root 19 0 2184 2184 1584 S 2,9 0,2 0:00 sendmail\n 1 root 8 0 520 520 452 S 0,0 0,0 0:03 init\n 2 root 9 0 0 0 0 SW 0,0 0,0 0:00 keventd\n 3 root 9 0 0 0 0 SW 0,0 0,0 0:00 kapm-idled\n 4 root 19 19 0 0 0 SWN 0,0 0,0 0:00 ksoftirqd_CPU0\n 5 root 9 0 0 0 0 SW 0,0 0,0 0:28 kswapd\n 6 root 9 0 0 0 0 SW 0,0 0,0 0:00 kreclaimd\n 7 root 9 0 0 0 0 SW 0,0 0,0 0:09 bdflush\n 8 root 9 0 0 0 0 SW 0,0 0,0 0:00 kupdated\n 9 root -1 -20 0 0 0 SW< 0,0 0,0 0:00 mdrecoveryd\n 13 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 136 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 137 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 138 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 139 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 140 root 9 0 0 0 0 SW 0,0 0,0 2:16 kjournald\n 378 root 9 0 0 0 0 SW 0,0 0,0 0:00 eth0\n 454 root 9 0 572 572 476 S 0,0 0,0 0:00 syslogd\n 459 root 9 0 1044 1044 392 S 0,0 0,1 0:00 klogd\n 572 root 8 0 1128 1092 968 S 0,0 0,1 0:07 sshd\n 584 root 9 0 1056 1056 848 S 0,0 0,1 0:02 nlservd\n 611 root 8 0 1836 1820 1288 S 0,0 0,1 0:00 sendmail\n 693 root 9 0 640 640 556 S 0,0 0,0 0:00 crond\n 729 daemon 9 0 472 464 404 S 0,0 0,0 0:00 atd\n 736 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 737 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 738 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 739 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 740 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 741 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 9800 root 9 0 1888 1864 1552 S 0,0 0,1 0:02 sshd\n 9801 root 16 0 1368 1368 1016 S 0,0 0,1 0:00 bash\n10574 postgres 0 0 1448 1448 1380 S 0,0 0,1 0:00 postmaster\n10576 postgres 9 0 1436 1436 1388 S 0,0 0,1 0:00 postmaster\n10577 postgres 9 0 1480 1480 1388 S 0,0 0,1 0:00 postmaster\n10579 postgres 14 0 11500 11M 10324 S 0,0 1,1 0:08 postmaster\n10580 postgres 9 0 11672 11M 10328 S 0,0 1,1 0:03 postmaster\n10581 postgres 14 0 11620 11M 10352 S 0,0 1,1 0:08 postmaster\n10585 postgres 11 0 11560 11M 10304 S 0,0 1,1 0:08 postmaster\n10588 postgres 9 0 11520 11M 10316 S 0,0 1,1 0:14 postmaster\n10589 postgres 9 0 11632 11M 10324 S 0,0 1,1 0:06 postmaster\n10590 postgres 10 0 11620 11M 10320 S 0,0 1,1 0:06 postmaster\n10591 postgres 9 0 11536 11M 10320 S 0,0 1,1 0:08 postmaster\n10592 postgres 11 0 11508 11M 10316 S 0,0 1,1 0:04 postmaster\n10595 postgres 9 0 11644 11M 10324 S 0,0 1,1 0:03 postmaster\n10596 postgres 11 0 11664 11M 10328 S 0,0 1,1 0:08 postmaster\n10597 postgres 9 0 11736 11M 10340 S 0,0 1,1 0:24 postmaster\n10598 postgres 9 0 11500 11M 10312 S 0,0 1,1 0:10 postmaster\n10599 postgres 11 0 11676 11M 10324 S 0,0 1,1 0:13 postmaster\n10602 postgres 9 0 11476 11M 10308 S 0,0 1,1 0:09 postmaster\n10652 postgres 9 0 7840 7840 7020 S 0,0 0,7 0:00 postmaster\n10669 postgres 9 0 9076 9076 8224 S 0,0 0,8 0:00 postmaster\n10677 root 13 0 1032 1028 828 R 0,0 0,1 0:00 top\n\nI have now changed the SHMMAX settings to 545333248 and changed the\nshared_buffers to 65535 again. 
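(That shmget size makes sense: 65535 shared buffers at 8 KB each come to 65535 * 8192 = 536,862,720 bytes, about 512 MB, and PostgreSQL asks for a few MB of fixed shared-memory overhead on top of that -- hence the 545,333,248-byte request that SHMMAX has to accommodate.)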
now postgres starts up correctly.\n\nthe top result changes to:\n\n 11:40am up 1 day, 17:20, 1 user, load average: 2,24, 2,51, 2,14\n57 processes: 55 sleeping, 2 running, 0 zombie, 0 stopped\nCPU states: 24,7% user, 11,3% system, 0,0% nice, 6,2% idle\nMem: 1020808K av, 1015844K used, 4964K free, 531420K shrd, 24796K buff\nSwap: 1028112K av, 60K used, 1028052K free 338376K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n11010 root 17 0 1036 1032 828 R 14,2 0,1 0:00 top\n11007 postgres 14 0 14268 13M 12668 R 9,7 1,3 0:00 postmaster\n11011 root 9 0 2184 2184 1584 S 3,0 0,2 0:00 sendmail\n 1 root 8 0 520 520 452 S 0,0 0,0 0:03 init\n 2 root 9 0 0 0 0 SW 0,0 0,0 0:00 keventd\n 3 root 9 0 0 0 0 SW 0,0 0,0 0:00 kapm-idled\n 4 root 19 19 0 0 0 SWN 0,0 0,0 0:00 ksoftirqd_CPU0\n 5 root 9 0 0 0 0 SW 0,0 0,0 0:29 kswapd\n 6 root 9 0 0 0 0 SW 0,0 0,0 0:00 kreclaimd\n 7 root 9 0 0 0 0 SW 0,0 0,0 0:09 bdflush\n 8 root 9 0 0 0 0 SW 0,0 0,0 0:00 kupdated\n 9 root -1 -20 0 0 0 SW< 0,0 0,0 0:00 mdrecoveryd\n 13 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 136 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 137 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 138 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 139 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n 140 root 9 0 0 0 0 SW 0,0 0,0 2:18 kjournald\n 378 root 9 0 0 0 0 SW 0,0 0,0 0:00 eth0\n 454 root 9 0 572 572 476 S 0,0 0,0 0:00 syslogd\n 459 root 9 0 1044 1044 392 S 0,0 0,1 0:00 klogd\n 572 root 8 0 1128 1092 968 S 0,0 0,1 0:07 sshd\n 584 root 9 0 1056 1056 848 S 0,0 0,1 0:02 nlservd\n 611 root 9 0 1836 1820 1288 S 0,0 0,1 0:00 sendmail\n 693 root 9 0 640 640 556 S 0,0 0,0 0:00 crond\n 729 daemon 9 0 472 464 404 S 0,0 0,0 0:00 atd\n 736 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 737 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 738 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 739 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 740 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 741 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n 9800 root 9 0 1888 1864 1552 S 0,0 0,1 0:03 sshd\n 9801 root 10 0 1368 1368 1016 S 0,0 0,1 0:00 bash\n10838 postgres 7 0 6992 6992 6924 S 0,0 0,6 0:00 postmaster\n10840 postgres 9 0 6984 6984 6932 S 0,0 0,6 0:00 postmaster\n10841 postgres 9 0 7024 7024 6932 S 0,0 0,6 0:00 postmaster\n10852 postgres 9 0 489M 489M 487M S 0,0 49,0 0:32 postmaster\n10869 postgres 9 0 357M 357M 356M S 0,0 35,8 0:21 postmaster\n10908 postgres 9 0 263M 263M 262M S 0,0 26,4 0:20 postmaster\n10909 postgres 9 0 283M 283M 281M S 0,0 28,4 0:19 postmaster\n10932 postgres 9 0 288M 288M 286M S 0,0 28,9 0:13 postmaster\n10946 postgres 9 0 213M 213M 211M S 0,0 21,4 0:06 postmaster\n10947 postgres 9 0 239M 239M 238M S 0,0 24,0 0:07 postmaster\n10948 postgres 9 0 292M 292M 290M S 0,0 29,2 0:09 postmaster\n10957 postgres 9 0 214M 214M 212M S 0,0 21,5 0:10 postmaster\n10964 postgres 9 0 58156 56M 56400 S 0,0 5,6 0:05 postmaster\n10974 postgres 9 0 50860 49M 49120 S 0,0 4,9 0:04 postmaster\n10975 postgres 9 0 209M 209M 207M S 0,0 21,0 0:04 postmaster\n10976 postgres 9 0 174M 174M 172M S 0,0 17,5 0:08 postmaster\n10977 postgres 9 0 52484 51M 50932 S 0,0 5,1 0:05 postmaster\n10990 postgres 9 0 199M 199M 197M S 0,0 19,9 0:06 postmaster\n10993 postgres 9 0 141M 141M 139M S 0,0 14,1 0:01 postmaster\n10998 postgres 9 0 181M 181M 180M S 0,0 18,2 0:04 postmaster\n10999 postgres 9 0 139M 139M 138M S 0,0 14,0 0:01 postmaster\n11001 postgres 9 0 45484 44M 43948 S 0,0 4,4 0:01 postmaster\n11006 postgres 9 0 15276 14M 13952 S 0,0 1,4 0:00 postmaster\n\n\nnow, does this 
look better in your eyes?\n\n> What are your current settings? Could you please repost. I lost earlier \n> thread(Sorry for that.. Had a HDD meltdown here couple of days back. Lost few \n> mails..)\n\ndo you need more information here?\n\n\n", "msg_date": "Wed, 13 Nov 2002 10:42:39 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On 13 Nov 2002 at 10:42, Henrik Steffen wrote:\n> > Yes. 2*max connection is minimum. Anything additional is always welcome as long \n> > as it does not starve the system.\n> \n> ok, I tried to set shared_buffers to 65535 now. but then restarting postgres\n> fails - it says: \n> \n> IpcMemoryCreate: shmget(key=5432001, size=545333248, 03600) failed: Invalid argument\n> \n> and a message telling me to either lower the shared_buffers or raise the\n> SHMMAX. \n\nYes. you need to raise SHMMAX. A good feature of recent linux distro. is that \nthey set SHMMAX to half of physical memory. A very good default IMO..\n\n> 11:06am up 1 day, 16:46, 1 user, load average: 1,32, 1,12, 1,22\n> 53 processes: 52 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU states: 24,5% user, 11,2% system, 0,0% nice, 5,6% idle\n> Mem: 1020808K av, 1006156K used, 14652K free, 8520K shrd, 37204K buff\n> Swap: 1028112K av, 60K used, 1028052K free 849776K cached\n> I have now changed the SHMMAX settings to 545333248 and changed the\n> shared_buffers to 65535 again. now postgres starts up correctly.\n> \n> the top result changes to:\n> \n> 11:40am up 1 day, 17:20, 1 user, load average: 2,24, 2,51, 2,14\n> 57 processes: 55 sleeping, 2 running, 0 zombie, 0 stopped\n> CPU states: 24,7% user, 11,3% system, 0,0% nice, 6,2% idle\n> Mem: 1020808K av, 1015844K used, 4964K free, 531420K shrd, 24796K buff\n> Swap: 1028112K av, 60K used, 1028052K free 338376K cached\n> now, does this look better in your eyes?\n\nWell, don't look at top to find out free memoy. Use free. On my machine..\n\n[shridhar@perth shridhar]$ free\n total used free shared buffers cached\nMem: 255828 250676 5152 0 66564 29604\n-/+ buffers/cache: 154508 101320\nSwap: 401616 12764 388852\n[shridhar@perth shridhar]$\n \nHere the important value is second value in second line, 101320. That's true \nfree memory. Remeber when system needs memory, it can always shrunk \ncache/buffers. In both of your stats, cache+memory are roughly 400MB.\n\nRelax, your system is not starving for memory...\n\n> do you need more information here?\n\nNot for this problem, but just curious. What does uname -a says?\n\nSecondly just curious, with 5000 requests per minute, what is the peak number \nof connection you are getting? You should look int pooling parameters for \nbetter performance..\n\nHTH\n\n\nBye\n Shridhar\n\n--\nHawkeye's Conclusion:\tIt's not easy to play the clown when you've got to run \nthe whole\tcircus.\n\n", "msg_date": "Wed, 13 Nov 2002 15:30:15 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "Dear Shridhar,\n\nok, so my system has got 362 MB of free RAM currently... this sounds good.\n\nuname -a says:\nLinux db2.city-map.de 2.4.7-10 #1 Thu Sep 6 17:27:27 EDT 2001 i686 unknown\n\nI didn't actually measure requests per minute through a longer period...\nI tested it 2 hours ago using debug and logging all queries, and I saw\napprox. 2500 requests per minute. 
but at that time of the day there are\nonly about 25 simultaneous users on our website. so i calculated 50\nusers and 5.000 rpm for average daytime usage. I guess the maximum peak\nwould be approx. 10.000 queries per minute.\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, November 13, 2002 11:00 AM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> On 13 Nov 2002 at 10:42, Henrik Steffen wrote:\n> > > Yes. 2*max connection is minimum. Anything additional is always welcome as long\n> > > as it does not starve the system.\n> >\n> > ok, I tried to set shared_buffers to 65535 now. but then restarting postgres\n> > fails - it says:\n> >\n> > IpcMemoryCreate: shmget(key=5432001, size=545333248, 03600) failed: Invalid argument\n> >\n> > and a message telling me to either lower the shared_buffers or raise the\n> > SHMMAX.\n>\n> Yes. you need to raise SHMMAX. A good feature of recent linux distro. is that\n> they set SHMMAX to half of physical memory. A very good default IMO..\n>\n> > 11:06am up 1 day, 16:46, 1 user, load average: 1,32, 1,12, 1,22\n> > 53 processes: 52 sleeping, 1 running, 0 zombie, 0 stopped\n> > CPU states: 24,5% user, 11,2% system, 0,0% nice, 5,6% idle\n> > Mem: 1020808K av, 1006156K used, 14652K free, 8520K shrd, 37204K buff\n> > Swap: 1028112K av, 60K used, 1028052K free 849776K cached\n> > I have now changed the SHMMAX settings to 545333248 and changed the\n> > shared_buffers to 65535 again. now postgres starts up correctly.\n> >\n> > the top result changes to:\n> >\n> > 11:40am up 1 day, 17:20, 1 user, load average: 2,24, 2,51, 2,14\n> > 57 processes: 55 sleeping, 2 running, 0 zombie, 0 stopped\n> > CPU states: 24,7% user, 11,3% system, 0,0% nice, 6,2% idle\n> > Mem: 1020808K av, 1015844K used, 4964K free, 531420K shrd, 24796K buff\n> > Swap: 1028112K av, 60K used, 1028052K free 338376K cached\n> > now, does this look better in your eyes?\n>\n> Well, don't look at top to find out free memoy. Use free. On my machine..\n>\n> [shridhar@perth shridhar]$ free\n> total used free shared buffers cached\n> Mem: 255828 250676 5152 0 66564 29604\n> -/+ buffers/cache: 154508 101320\n> Swap: 401616 12764 388852\n> [shridhar@perth shridhar]$\n>\n> Here the important value is second value in second line, 101320. That's true\n> free memory. Remeber when system needs memory, it can always shrunk\n> cache/buffers. In both of your stats, cache+memory are roughly 400MB.\n>\n> Relax, your system is not starving for memory...\n>\n> > do you need more information here?\n>\n> Not for this problem, but just curious. What does uname -a says?\n>\n> Secondly just curious, with 5000 requests per minute, what is the peak number\n> of connection you are getting? 
You should look int pooling parameters for\n> better performance..\n>\n> HTH\n>\n>\n> Bye\n> Shridhar\n>\n> --\n> Hawkeye's Conclusion: It's not easy to play the clown when you've got to run\n> the whole circus.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 13 Nov 2002 11:54:45 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On 13 Nov 2002 at 11:54, Henrik Steffen wrote:\n\n> Dear Shridhar,\n> \n> ok, so my system has got 362 MB of free RAM currently... this sounds good.\n\nCool.. Keep watching that.. If that goes down to less than 50, you certainly \nneed to look into..\n \n> uname -a says:\n> Linux db2.city-map.de 2.4.7-10 #1 Thu Sep 6 17:27:27 EDT 2001 i686 unknown\n\nhmm.. Some sort of RedHat I assume. Upgrade the kernel at least. Any variant of \n2.4.19.x should give you at least 10-15% performance increase. Besides that \nwill cure the linux VM fiascos as well..\n \n> I didn't actually measure requests per minute through a longer period...\n> I tested it 2 hours ago using debug and logging all queries, and I saw\n> approx. 2500 requests per minute. but at that time of the day there are\n> only about 25 simultaneous users on our website. so i calculated 50\n> users and 5.000 rpm for average daytime usage. I guess the maximum peak\n> would be approx. 10.000 queries per minute.\n\nHmm.. Certainly connection pooling will give you advantage. Tune it if you can( \nphp BTW?) \n\nHTH\n\n\nBye\n Shridhar\n\n--\ndivorce, n:\tA change of wife.\n\n", "msg_date": "Wed, 13 Nov 2002 16:32:42 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "> Cool.. Keep watching that.. If that goes down to less than 50, you certainly \n> need to look into..\n\nI will.\n \n> hmm.. Some sort of RedHat I assume. Upgrade the kernel at least. Any variant of \n> 2.4.19.x should give you at least 10-15% performance increase. Besides that \n> will cure the linux VM fiascos as well..\n\nredhat, right. Ok, I will have this tested.\n \n> Hmm.. Certainly connection pooling will give you advantage. Tune it if you can( \n> php BTW?) \n\nusing persistent connections with perl / apache mod_perl \n\n", "msg_date": "Wed, 13 Nov 2002 13:00:18 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n> 3) Sort mem is a tricky affair. AFAIU, it is used only when you create index or \n> sort results of a query. If do these things seldomly, you can set this very low \n> or default. For individual session that creates index, you can set the sort \n> memory accordingly.\n\nWhat would the benefit of this be? sort_mem is just an upper limit on\nmemory consumption, and that memory is only allocated on demand. 
So\nthere shouldn't be a difference between setting sort_mem globally to\nsome reasonable value, and manually changing it for backends that need\nto do any sorting.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "13 Nov 2002 08:20:43 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On 13 Nov 2002 at 8:20, Neil Conway wrote:\n\n> \"Shridhar Daithankar\" <[email protected]> writes:\n> > 3) Sort mem is a tricky affair. AFAIU, it is used only when you create index or \n> > sort results of a query. If do these things seldomly, you can set this very low \n> > or default. For individual session that creates index, you can set the sort \n> > memory accordingly.\n> \n> What would the benefit of this be? sort_mem is just an upper limit on\n> memory consumption, and that memory is only allocated on demand. So\n> there shouldn't be a difference between setting sort_mem globally to\n> some reasonable value, and manually changing it for backends that need\n> to do any sorting.\n\nWell, while that is correct, setting sort mem high only when required would \nprevent memory exhaustion if that happens. \n\nRemember he has 5000 requests per minute with concurrent connection. Now say \nthere is a default high setting of sort mem and a connection persist for a long \ntime, it *might* accumulate memory. Personally I would not keep it high by \ndefault.\n\nBye\n Shridhar\n\n--\nAbsentee, n.:\tA person with an income who has had the forethought to remove\t\nhimself from the sphere of exaction.\t\t-- Ambrose Bierce, \"The Devil's \nDictionary\"\n\n", "msg_date": "Wed, 13 Nov 2002 18:55:06 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "> > What would the benefit of this be? sort_mem is just an upper limit on\n> > memory consumption, and that memory is only allocated on demand. So\n> > there shouldn't be a difference between setting sort_mem globally to\n> > some reasonable value, and manually changing it for backends that need\n> > to do any sorting.\n>\n> Well, while that is correct, setting sort mem high only when required\nwould\n> prevent memory exhaustion if that happens.\n>\n> Remember he has 5000 requests per minute with concurrent connection. Now\nsay\n> there is a default high setting of sort mem and a connection persist for a\nlong\n> time, it *might* accumulate memory. Personally I would not keep it high by\n> default.\n\nCould you elaborate on what exactly is a query requiring sorting (and\ntherefore is affected by sort_mem setting)?\n\nIs it a SELECT with WHERE-clause using seq scan? Is it rebuilding of an\nindex? What else could it be?\n\nRegards,\nBjoern\n\n", "msg_date": "Wed, 13 Nov 2002 14:30:01 +0100", "msg_from": "=?iso-8859-1?Q?Bj=F6rn_Metzdorf?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On 13 Nov 2002 at 14:30, Bj�rn Metzdorf wrote:\n\n> Could you elaborate on what exactly is a query requiring sorting (and\n> therefore is affected by sort_mem setting)?\n> Is it a SELECT with WHERE-clause using seq scan? Is it rebuilding of an\n> index? 
What else could it be?\n\nI can think of an sql query with an order by clause on a non-indexed field and \nsay that field is not included in where condition e.g.\n\nselect name, addreess from users where id>1000 order by name;\n\nwith index on id.\n\n\nBye\n Shridhar\n\n--\nYou canna change the laws of physics, Captain; I've got to have thirty minutes!\n\n", "msg_date": "Wed, 13 Nov 2002 19:04:27 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n\n> > hmm.. Some sort of RedHat I assume. Upgrade the kernel at least. Any variant of \n> > 2.4.19.x should give you at least 10-15% performance increase. Besides that \n> > will cure the linux VM fiascos as well..\n> \n> redhat, right. Ok, I will have this tested.\n\nIf you don't want to roll your own, I think RH has a newer errata\nkernel on updates.redhat.com that you can install as an RPM. \n\n-Doug\n", "msg_date": "13 Nov 2002 08:52:14 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n> 3) Sort mem is a tricky affair. AFAIU, it is used only when you create\n> index or sort results of a query. If do these things seldomly, you can\n> set this very low or default.\n\nI think this is bad advice. Sort memory is only consumed when needed,\nso there's no advantage in decreasing the setting just because you think\na particular client process isn't going to need to sort. All you will\naccomplish is to pessimize your performance if a sort does happen to be\nneeded.\n\nYou do need to set the installation default on the basis of thinking\nabout what will happen if all backends are trying to sort at once.\nBut having done that, you should be able to increase the setting in\nindividual sessions that you know are going to do large sorts.\n\nThe default setting (1024K) is, like most of the default settings\nin PG, on the small side IMHO.\n\n\n\nI don't care for advice that leads to allocating half of physical RAM to\nPG's shared buffers, either. This ignores the fact that the kernel's\ndisk caches are nearly as effective as PG's internal buffers, and much\nmore flexible (because the kernel can decrease the size of its caches\nwhen there's heavy memory pressure from processes). I'd start with a\nfew thousand shared buffers and let the kernel consume the bulk of RAM\nwith its buffering. That approach lets you use a higher sort_mem\nsetting, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Nov 2002 09:11:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "Henrik,\n\nFirst off, I'm moving this discussion to the PGSQL-PERFORMANCE list,\nwhere it belongs. 
To subscribe, send the message \"subscribe\npgsql-perform [email protected]\" to \"[email protected]\".\n\n> This is was I figured out now:\n> \n> 1) RAM available: 1024 MB, there's nothing else but postgres on this\n> machine, so if I calculate 128 MB for Linux, there are 896 MB left\n> for Postgres.\n> \n> 2) 70 % of 896 MB is 627 MB\n> \n> Now, if I follow your instructions:\n> \n> 250K +\n> 8.2K * 128 (shared_buffers) = 1049,6K +\n> 14.2K * 64 (max_connections) = 908,8K +\n> 1024K * 5000 (average number of requests per minute) = 5120000K\n> ===============================================================\n> 5122208.4K ==> 5002.16 MB\n> \n> this is a little bit more than I have available, isn't it? :(((\n> \n> sure that this has got to be the \"average number of requests per\n> minute\"\n> and not \"per second\" ? seems so much, doesn't it?\n> \n> what am I supposed to do now?\n\nWell, now it gets more complicated. You need to determine:\nA) The median processing time of each of those requests.\nB) The amount of Sort_mem actually required for each request.\n\nI reccommend \"per minute\" because that's an easy over-estimate ... few\nrequests last a full minute, and as a result\naverage-requests-per-minute gives you a safe guage of maximum\nconcurrent requests (in transactional database environments), which is\nreally what we are trying to determine. \n\nUm, you do know that I'm talking about *database* requests -- that is,\nqueries -- and not web page requests, yes? If you're using server-side\ncaching, there can be a *huge* difference.\n\nIf you have 5000 requests per minute, and only 64 connections, then I\ncan hypothesize that:\n1) you are doing some kind of connection pooling;\n2) those are exclusively *read-only* requests;\n3) those are very simple requests, or at least processed very quickly.\n\nIf all of the above is true, then you can probably base you calculation\non requests-per-second, rather than requests-per-minute.\n\nThen, of course, it becomes an interactive process. You change the\nsettings, re-start the database server, and watch the memory used by\nthe postgreSQL processes. Your goal is to have that memory usage\nhover around 700mb during heavy usage periods (any less, and you are\nthrottling the database through scarcity of RAM) but to never, ever,\nforce usage of Swap memory, which will slow down the server 10-fold.\n\nIf you see the RAM only at half that, but the processor at 90%+, then\nyou should consider upgrading your processor. But you're more likely\nto run out of RAM first. 
I believe that you haven't already because\nwith your low shared-buffer settings, most of the potential sort_mem is\ngoing unused.\n\nBTW, if you are *really* getting 5000 queries per minute, I would\nstrongly reccomend doubling your RAM.\n\n-Josh Berkus\n\n\n> \n> ----- Original Message -----\n> From: \"Josh Berkus\" <[email protected]>\n> To: <[email protected]>\n> Cc: <[email protected]>\n> Sent: Tuesday, November 12, 2002 9:05 PM\n> Subject: Re: Upgrade to dual processor machine?\n> \n> \n> Heinrik,\n> \n> \"So, where do i find and change shmmax shmall settings ??\n> What should I put there?\n> \n> What is a recommended value for shared buffers in postgresql.conf ?\"\n> \n> There is no \"recommended value.\" You have to calculate this\n> relatively:\n> \n> 1) Figure out how much RAM your server has available for PostgreSQL.\n> For\n> example, I have one server on which I allocate 256 mb for Apache, 128\n> mb for\n> linux, and thus have 512mb available for Postgres.\n> \n> 2) Calculate out the memory settings to use 70% of that amount of Ram\n> in\n> regular usage. Please beware that sort_mem is *not* shared, meaning\n> that it\n> will be multiplied by the number of concurrent requests requiring\n> sorting.\n> Thus, your calculation (in K) should be:\n> \n> 250K +\n> 8.2K * shared_buffers +\n> 14.2K * max_connections +\n> sort_mem * average number of requests per minute\n> =====================================\n> memory available to postgresql in K * 0.7\n> \n> You will also have to set SHMMAX and SHMMALL to accept this memory\n> allocation.\n> Since shmmax is set in bytes, then I generally feel safe making it:\n> 1024 * 0.5 * memory available to postgresql in K\n> \n> Setting them is done simply:\n> $ echo 134217728 >/proc/sys/kernel/shmall\n> $ echo 134217728 >/proc/sys/kernel/shmmax\n> \n> This is all taken from the postgresql documentation, with some\n> experience:\n>\nhttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html\n> \n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n", "msg_date": "Wed, 13 Nov 2002 09:05:35 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On Wed, 13 Nov 2002, Henrik Steffen wrote:\n\n> \n> Hello Shridhar,\n> \n> thanks for your answer...\n> \n> 1) in the docs it says: shared_buffers should be 2*max_connections, min 16.\n> now, you suggest to put it to 500-600 MB, which means I will have to\n> increase shared_buffers to 68683 -- is this really correct? I mean,\n> RAM is allready now almost totally consumed.\n\nActually, that's not quite correct. The RAM is already showing as being \nin use, but it's being used by the kernel as file cache, and will be \nreleased the second a process asks for more memory, so it really isn't \"in \nuse\" in the classic sense.\n\n> 2) the database has a size of 3.6 GB at the moment... about 100 user tables.\n> \n> 3) ok, I understand: I am not creating any indexes usually. Only once at night\n> all user indexes are dropped and recreated, I could imagine to increase the\n> sort_mem for this script... so sort_mem with 1024K is ok, or should it be\n> lowered to, say, 512K ?\n\nGenerally a sort mem of 8 meg or less is pretty safe, as the allocation is \nonly made WHILE the sort is running and is released right after. 
The \ndanger is that if it is set higher, like say 32 or 64 meg, and a dozen or \nso sql statements just happen to all sort at the same time, you can run \nout of memory and have a \"swap storm\" where the machine is swapping out \nprocesses one after the other to give each the amount of swap space it \nneeds. Note also that a SQL query with more than one sort in it will use \nup to sort_mem for each sort independently, so a dozen SQL queries that \neach require say three sorts all running at once can theoretically use \n36*sort_mem amount of memory. Once the machine starts swapping for \nsort_mem, things get slow VERY fast. It's one of those knees you don't \nwant to hit.\n\n\n", "msg_date": "Wed, 13 Nov 2002 10:18:19 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "Henrik\n\nOops! Two corrections, below. Sorry about the typos.\n\n> First off, I'm moving this discussion to the PGSQL-PERFORMANCE list,\n> where it belongs. To subscribe, send the message \"subscribe\n> pgsql-perform [email protected]\" to \"[email protected]\".\n\nSorry ... thats \"subscribe pgsql-performance [email protected]\".\n\n> Then, of course, it becomes an interactive process. You change the\n> settings, re-start the database server, and watch the memory used by\n> the postgreSQL processes. Your goal is to have that memory usage\n> hover around 700mb during heavy usage periods (any less, and you are\n\nThat's \"600mb\", not \"700mb\". \n\nI also just read Tom's response regarding reserving more RAM for kernel\nbuffering. This hasn't been my experience, but then I work mostly\nwith transactional databases (many read-write requests) rather than\nread-only databases. \n\nAs such, I'd be interested in a test: Calculate out your PostgreSQL\nRAM to total, say, 256mb and run a speed test on the database. Then\ncalculate it out to the previous 600mb, and do the same. I'd like to\nknow the results.\n\nIn fact, e-mail me off list if you want further help -- I'm interested\nin the outcome.\n\n-Josh Berkus\n", "msg_date": "Wed, 13 Nov 2002 09:21:50 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On 13 Nov 2002 at 9:21, Josh Berkus wrote:\n> I also just read Tom's response regarding reserving more RAM for kernel\n> buffering. This hasn't been my experience, but then I work mostly\n> with transactional databases (many read-write requests) rather than\n> read-only databases. \n> \n> As such, I'd be interested in a test: Calculate out your PostgreSQL\n> RAM to total, say, 256mb and run a speed test on the database. Then\n> calculate it out to the previous 600mb, and do the same. I'd like to\n> know the results.\n\nI would like to add here. Let's say you do large amount of reads/writes. Make \nsure that size of data exceeds RAM allocated to postgresql. Testing 100MB of \ndata with 256MB or 600MB of buffers isn't going to make any difference in \nperformance. If this is the practical scenario, then Tom's suggestion is a \nbetter solution.\n\nIMO postgresql buffers should be enough to hold data requierd for \ntransactions+some good chunk of read only data. 
Read only data can be left to \nOS buffers for most part of it..\n\nHTH\n\nBye\n Shridhar\n\n--\nSometimes a feeling is all we humans have to go on.\t\t-- Kirk, \"A Taste of \nArmageddon\", stardate 3193.9\n\n", "msg_date": "Thu, 14 Nov 2002 09:35:21 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "On Wed, Nov 13, 2002 at 10:18:19AM -0700, scott.marlowe wrote:\n> Generally a sort mem of 8 meg or less is pretty safe, as the allocation is \n> only made WHILE the sort is running and is released right after. The \n> danger is that if it is set higher, like say 32 or 64 meg, and a dozen or \n> so sql statements just happen to all sort at the same time, you can run \n> out of memory and have a \"swap storm\" where the machine is swapping out \n> processes one after the other to give each the amount of swap space it \n> needs. Note also that a SQL query with more than one sort in it will use \n> up to sort_mem for each sort independently, so a dozen SQL queries that \n> each require say three sorts all running at once can theoretically use \n> 36*sort_mem amount of memory. Once the machine starts swapping for \n> sort_mem, things get slow VERY fast. It's one of those knees you don't \n> want to hit.\n\nSomething I havn't seen mentioned yet is the very useful program \"vmstat\".\nIt gives you a quick summary of how many blocks are being copied to and from\ndisk, whether and how much you are swapping, how many processes are actually\nrunning and sleeping at any one time. You can tell it you give you an\naverage over any period of time (like a minute or an hour). Example:\n\n# vmstat 1\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 29728 47584 1128 63048 3 2 17 10 21 38 9 2 35\n 1 0 0 29728 47584 1128 63048 0 0 0 0 1539 356 6 0 94\n 1 0 0 29728 47552 1128 63080 0 0 32 0 1542 354 5 1 94\n 1 0 0 29728 47552 1128 63080 0 0 0 0 1551 355 6 0 94\n 1 0 0 29728 47520 1128 63112 0 0 32 0 1542 361 5 2 93\n\nAs you can see, not a terribly loaded machine :)\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> We place no reliance On Virgin or Pigeon; \n> Our method is Science, Our aim is Religion. - Aleister Crowley", "msg_date": "Thu, 14 Nov 2002 16:45:21 +1100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nok, now i subscribed to performance, too ;-)\n\n> Well, now it gets more complicated. You need to determine:\n> A) The median processing time of each of those requests.\n> B) The amount of Sort_mem actually required for each request.\n\nas I am dealing with postgres for a webserver the median processing\ntime of each request has got to be <1 sec. how can i measure\nthe amount of sort_mem needed?\n\nat the highest there are perhaps 20 concurrent database requests\nat the same time. i have enabled 64 maximum connections, because\ni have apache configured to use persistent database connections\nusing mod_perl and pg.pm. I set Apache to run MaxClients at 40\n(there could additionally be some manual psql connections)\n\n> Um, you do know that I'm talking about *database* requests -- that is,\n> queries -- and not web page requests, yes? If you're using server-side\n> caching, there can be a *huge* difference.\n\nyes, I did understand that. 
And when I measured 5.000 requests per minute\nI looked at the pgsql.log (after enabling the debug options and counting\nall the queries within minute). so, server-side caching does not appear\nwithin these 5.000 requests... that's for sure.\n\n> If you have 5000 requests per minute, and only 64 connections, then I\n> can hypothesize that:\n> 1) you are doing some kind of connection pooling;\n> 2) those are exclusively *read-only* requests;\n> 3) those are very simple requests, or at least processed very quickly\n\n1) correct\n2) no, not exclusively - but as it's a webserver-application (www.city-map.de)\nmost users just read from the database, while they always do an update\nto raise some statistics (page-views counters etc.) - furthermore, there is\nan internal content-management system where about 100 editors do inserts and\nupdates. but there are of course more visitors (>10.000 daily) than editors.\n3) yes, many requests are very simple for better performance in a web-application\n\nSwapping does never happen so far.\n\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>; <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Wednesday, November 13, 2002 6:05 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> Henrik,\n>\n> First off, I'm moving this discussion to the PGSQL-PERFORMANCE list,\n> where it belongs. To subscribe, send the message \"subscribe\n> pgsql-perform [email protected]\" to \"[email protected]\".\n>\n> > This is was I figured out now:\n> >\n> > 1) RAM available: 1024 MB, there's nothing else but postgres on this\n> > machine, so if I calculate 128 MB for Linux, there are 896 MB left\n> > for Postgres.\n> >\n> > 2) 70 % of 896 MB is 627 MB\n> >\n> > Now, if I follow your instructions:\n> >\n> > 250K +\n> > 8.2K * 128 (shared_buffers) = 1049,6K +\n> > 14.2K * 64 (max_connections) = 908,8K +\n> > 1024K * 5000 (average number of requests per minute) = 5120000K\n> > ===============================================================\n> > 5122208.4K ==> 5002.16 MB\n> >\n> > this is a little bit more than I have available, isn't it? :(((\n> >\n> > sure that this has got to be the \"average number of requests per\n> > minute\"\n> > and not \"per second\" ? seems so much, doesn't it?\n> >\n> > what am I supposed to do now?\n>\n> Well, now it gets more complicated. You need to determine:\n> A) The median processing time of each of those requests.\n> B) The amount of Sort_mem actually required for each request.\n>\n> I reccommend \"per minute\" because that's an easy over-estimate ... 
few\n> requests last a full minute, and as a result\n> average-requests-per-minute gives you a safe guage of maximum\n> concurrent requests (in transactional database environments), which is\n> really what we are trying to determine.\n>\n> Um, you do know that I'm talking about *database* requests -- that is,\n> queries -- and not web page requests, yes? If you're using server-side\n> caching, there can be a *huge* difference.\n>\n> If you have 5000 requests per minute, and only 64 connections, then I\n> can hypothesize that:\n> 1) you are doing some kind of connection pooling;\n> 2) those are exclusively *read-only* requests;\n> 3) those are very simple requests, or at least processed very quickly.\n>\n> If all of the above is true, then you can probably base you calculation\n> on requests-per-second, rather than requests-per-minute.\n>\n> Then, of course, it becomes an interactive process. You change the\n> settings, re-start the database server, and watch the memory used by\n> the postgreSQL processes. Your goal is to have that memory usage\n> hover around 700mb during heavy usage periods (any less, and you are\n> throttling the database through scarcity of RAM) but to never, ever,\n> force usage of Swap memory, which will slow down the server 10-fold.\n>\n> If you see the RAM only at half that, but the processor at 90%+, then\n> you should consider upgrading your processor. But you're more likely\n> to run out of RAM first. I believe that you haven't already because\n> with your low shared-buffer settings, most of the potential sort_mem is\n> going unused.\n>\n> BTW, if you are *really* getting 5000 queries per minute, I would\n> strongly reccomend doubling your RAM.\n>\n> -Josh Berkus\n>\n>\n> >\n> > ----- Original Message -----\n> > From: \"Josh Berkus\" <[email protected]>\n> > To: <[email protected]>\n> > Cc: <[email protected]>\n> > Sent: Tuesday, November 12, 2002 9:05 PM\n> > Subject: Re: Upgrade to dual processor machine?\n> >\n> >\n> > Heinrik,\n> >\n> > \"So, where do i find and change shmmax shmall settings ??\n> > What should I put there?\n> >\n> > What is a recommended value for shared buffers in postgresql.conf ?\"\n> >\n> > There is no \"recommended value.\" You have to calculate this\n> > relatively:\n> >\n> > 1) Figure out how much RAM your server has available for PostgreSQL.\n> > For\n> > example, I have one server on which I allocate 256 mb for Apache, 128\n> > mb for\n> > linux, and thus have 512mb available for Postgres.\n> >\n> > 2) Calculate out the memory settings to use 70% of that amount of Ram\n> > in\n> > regular usage. 
Please beware that sort_mem is *not* shared, meaning\n> > that it\n> > will be multiplied by the number of concurrent requests requiring\n> > sorting.\n> > Thus, your calculation (in K) should be:\n> >\n> > 250K +\n> > 8.2K * shared_buffers +\n> > 14.2K * max_connections +\n> > sort_mem * average number of requests per minute\n> > =====================================\n> > memory available to postgresql in K * 0.7\n> >\n> > You will also have to set SHMMAX and SHMMALL to accept this memory\n> > allocation.\n> > Since shmmax is set in bytes, then I generally feel safe making it:\n> > 1024 * 0.5 * memory available to postgresql in K\n> >\n> > Setting them is done simply:\n> > $ echo 134217728 >/proc/sys/kernel/shmall\n> > $ echo 134217728 >/proc/sys/kernel/shmmax\n> >\n> > This is all taken from the postgresql documentation, with some\n> > experience:\n> >\n> http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/runtime.html\n> >\n> > --\n> > -Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Thu, 14 Nov 2002 10:28:17 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine? " }, { "msg_contents": "hi,\n\nthis is what my vmstat 1 5 looks like --- cute tool, didn't know it yet - thanks!\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 2 1 1 60 4940 10288 344212 0 0 158 74 14 31 25 9 66\n 0 3 1 60 4940 10428 343680 0 0 6548 280 500 595 14 10 76\n 0 5 1 60 4940 10488 343148 0 0 7732 180 658 983 14 12 74\n 0 4 1 60 4964 10540 344536 0 0 6364 268 513 715 11 5 84\n 0 4 1 60 4964 10588 344056 0 0 5180 360 578 610 21 6 73\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Martijn van Oosterhout\" <[email protected]>\nTo: \"scott.marlowe\" <[email protected]>\nCc: \"Henrik Steffen\" <[email protected]>; <[email protected]>; <[email protected]>\nSent: Thursday, November 14, 2002 6:45 AM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n", "msg_date": "Thu, 14 Nov 2002 10:41:09 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "\nhi Ivan\n\nto get the shared buffer memory I followed all the instructions\nI gathered here on the list within the last two days. the kernel\nsettings SHMMAX etc. were important here in my opinion... 
you could\nsearch the archives for all the other mails within this thread and\ntry yourself.\n\nby the way: today we update to kernel 2.4.19 and we measured BIG\nperformance gains! however, since the upgrade 'top' doesn't show any\nshared memory in the summary any longer... yet for every process\nit lists a certain amount of shared mem... is this a kernel/top issue\nor did I miss something here?\n\nthe kernel is much more performant!\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"pginfo\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nSent: Thursday, November 14, 2002 3:15 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> Hi,\n> Sorry for this question.\n> I see you have 8520K shrd .\n> How are you setup you linux box to use tis shared buffers? (any answer will be great).\n>\n>\n>\n> I have tryed it many times without success.\n>\n> Also it is courios that the sum of cpu loading is not 100% !\n>\n> We are using two pg servers:\n> one single processor Intel 1 GHz, 1 GB RAM,\n> and one dual Intel 1GHz, 1.5 GB RAM.\n> It exist big diference in pg performance and I noticed many times when the system use\n> all two processors.\n>\n> If I can help you with more info you are free to ask.\n>\n> regards,\n> Ivan.\n>\n>\n> Henrik Steffen wrote:\n>\n> > dear shridhar,\n> >\n> > > Yes. 2*max connection is minimum. Anything additional is always welcome as long\n> > > as it does not starve the system.\n> >\n> > ok, I tried to set shared_buffers to 65535 now. but then restarting postgres\n> > fails - it says:\n> >\n> > IpcMemoryCreate: shmget(key=5432001, size=545333248, 03600) failed: Invalid argument\n> >\n> > and a message telling me to either lower the shared_buffers or raise the\n> > SHMMAX.\n> >\n> > > If you have a gig of memory and shared buffers are 536MB as you have indicated,\n> > > who is taking rest of the RAM?\n> >\n> > well, I guess it's postgres... 
see the output of top below:\n> >\n> > 11:06am up 1 day, 16:46, 1 user, load average: 1,32, 1,12, 1,22\n> > 53 processes: 52 sleeping, 1 running, 0 zombie, 0 stopped\n> > CPU states: 24,5% user, 11,2% system, 0,0% nice, 5,6% idle\n> > Mem: 1020808K av, 1006156K used, 14652K free, 8520K shrd, 37204K buff\n> > Swap: 1028112K av, 60K used, 1028052K free 849776K cached\n> >\n> > PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n> > 10678 root 19 0 2184 2184 1584 S 2,9 0,2 0:00 sendmail\n> > 1 root 8 0 520 520 452 S 0,0 0,0 0:03 init\n> > 2 root 9 0 0 0 0 SW 0,0 0,0 0:00 keventd\n> > 3 root 9 0 0 0 0 SW 0,0 0,0 0:00 kapm-idled\n> > 4 root 19 19 0 0 0 SWN 0,0 0,0 0:00 ksoftirqd_CPU0\n> > 5 root 9 0 0 0 0 SW 0,0 0,0 0:28 kswapd\n> > 6 root 9 0 0 0 0 SW 0,0 0,0 0:00 kreclaimd\n> > 7 root 9 0 0 0 0 SW 0,0 0,0 0:09 bdflush\n> > 8 root 9 0 0 0 0 SW 0,0 0,0 0:00 kupdated\n> > 9 root -1 -20 0 0 0 SW< 0,0 0,0 0:00 mdrecoveryd\n> > 13 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 136 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 137 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 138 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 139 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 140 root 9 0 0 0 0 SW 0,0 0,0 2:16 kjournald\n> > 378 root 9 0 0 0 0 SW 0,0 0,0 0:00 eth0\n> > 454 root 9 0 572 572 476 S 0,0 0,0 0:00 syslogd\n> > 459 root 9 0 1044 1044 392 S 0,0 0,1 0:00 klogd\n> > 572 root 8 0 1128 1092 968 S 0,0 0,1 0:07 sshd\n> > 584 root 9 0 1056 1056 848 S 0,0 0,1 0:02 nlservd\n> > 611 root 8 0 1836 1820 1288 S 0,0 0,1 0:00 sendmail\n> > 693 root 9 0 640 640 556 S 0,0 0,0 0:00 crond\n> > 729 daemon 9 0 472 464 404 S 0,0 0,0 0:00 atd\n> > 736 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 737 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 738 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 739 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 740 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 741 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 9800 root 9 0 1888 1864 1552 S 0,0 0,1 0:02 sshd\n> > 9801 root 16 0 1368 1368 1016 S 0,0 0,1 0:00 bash\n> > 10574 postgres 0 0 1448 1448 1380 S 0,0 0,1 0:00 postmaster\n> > 10576 postgres 9 0 1436 1436 1388 S 0,0 0,1 0:00 postmaster\n> > 10577 postgres 9 0 1480 1480 1388 S 0,0 0,1 0:00 postmaster\n> > 10579 postgres 14 0 11500 11M 10324 S 0,0 1,1 0:08 postmaster\n> > 10580 postgres 9 0 11672 11M 10328 S 0,0 1,1 0:03 postmaster\n> > 10581 postgres 14 0 11620 11M 10352 S 0,0 1,1 0:08 postmaster\n> > 10585 postgres 11 0 11560 11M 10304 S 0,0 1,1 0:08 postmaster\n> > 10588 postgres 9 0 11520 11M 10316 S 0,0 1,1 0:14 postmaster\n> > 10589 postgres 9 0 11632 11M 10324 S 0,0 1,1 0:06 postmaster\n> > 10590 postgres 10 0 11620 11M 10320 S 0,0 1,1 0:06 postmaster\n> > 10591 postgres 9 0 11536 11M 10320 S 0,0 1,1 0:08 postmaster\n> > 10592 postgres 11 0 11508 11M 10316 S 0,0 1,1 0:04 postmaster\n> > 10595 postgres 9 0 11644 11M 10324 S 0,0 1,1 0:03 postmaster\n> > 10596 postgres 11 0 11664 11M 10328 S 0,0 1,1 0:08 postmaster\n> > 10597 postgres 9 0 11736 11M 10340 S 0,0 1,1 0:24 postmaster\n> > 10598 postgres 9 0 11500 11M 10312 S 0,0 1,1 0:10 postmaster\n> > 10599 postgres 11 0 11676 11M 10324 S 0,0 1,1 0:13 postmaster\n> > 10602 postgres 9 0 11476 11M 10308 S 0,0 1,1 0:09 postmaster\n> > 10652 postgres 9 0 7840 7840 7020 S 0,0 0,7 0:00 postmaster\n> > 10669 postgres 9 0 9076 9076 8224 S 0,0 0,8 0:00 postmaster\n> > 10677 root 13 0 1032 1028 828 R 0,0 0,1 0:00 top\n> >\n> > I have now changed the SHMMAX settings to 545333248 and changed the\n> > 
shared_buffers to 65535 again. now postgres starts up correctly.\n> >\n> > the top result changes to:\n> >\n> > 11:40am up 1 day, 17:20, 1 user, load average: 2,24, 2,51, 2,14\n> > 57 processes: 55 sleeping, 2 running, 0 zombie, 0 stopped\n> > CPU states: 24,7% user, 11,3% system, 0,0% nice, 6,2% idle\n> > Mem: 1020808K av, 1015844K used, 4964K free, 531420K shrd, 24796K buff\n> > Swap: 1028112K av, 60K used, 1028052K free 338376K cached\n> >\n> > PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n> > 11010 root 17 0 1036 1032 828 R 14,2 0,1 0:00 top\n> > 11007 postgres 14 0 14268 13M 12668 R 9,7 1,3 0:00 postmaster\n> > 11011 root 9 0 2184 2184 1584 S 3,0 0,2 0:00 sendmail\n> > 1 root 8 0 520 520 452 S 0,0 0,0 0:03 init\n> > 2 root 9 0 0 0 0 SW 0,0 0,0 0:00 keventd\n> > 3 root 9 0 0 0 0 SW 0,0 0,0 0:00 kapm-idled\n> > 4 root 19 19 0 0 0 SWN 0,0 0,0 0:00 ksoftirqd_CPU0\n> > 5 root 9 0 0 0 0 SW 0,0 0,0 0:29 kswapd\n> > 6 root 9 0 0 0 0 SW 0,0 0,0 0:00 kreclaimd\n> > 7 root 9 0 0 0 0 SW 0,0 0,0 0:09 bdflush\n> > 8 root 9 0 0 0 0 SW 0,0 0,0 0:00 kupdated\n> > 9 root -1 -20 0 0 0 SW< 0,0 0,0 0:00 mdrecoveryd\n> > 13 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 136 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 137 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 138 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 139 root 9 0 0 0 0 SW 0,0 0,0 0:00 kjournald\n> > 140 root 9 0 0 0 0 SW 0,0 0,0 2:18 kjournald\n> > 378 root 9 0 0 0 0 SW 0,0 0,0 0:00 eth0\n> > 454 root 9 0 572 572 476 S 0,0 0,0 0:00 syslogd\n> > 459 root 9 0 1044 1044 392 S 0,0 0,1 0:00 klogd\n> > 572 root 8 0 1128 1092 968 S 0,0 0,1 0:07 sshd\n> > 584 root 9 0 1056 1056 848 S 0,0 0,1 0:02 nlservd\n> > 611 root 9 0 1836 1820 1288 S 0,0 0,1 0:00 sendmail\n> > 693 root 9 0 640 640 556 S 0,0 0,0 0:00 crond\n> > 729 daemon 9 0 472 464 404 S 0,0 0,0 0:00 atd\n> > 736 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 737 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 738 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 739 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 740 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 741 root 9 0 448 448 384 S 0,0 0,0 0:00 mingetty\n> > 9800 root 9 0 1888 1864 1552 S 0,0 0,1 0:03 sshd\n> > 9801 root 10 0 1368 1368 1016 S 0,0 0,1 0:00 bash\n> > 10838 postgres 7 0 6992 6992 6924 S 0,0 0,6 0:00 postmaster\n> > 10840 postgres 9 0 6984 6984 6932 S 0,0 0,6 0:00 postmaster\n> > 10841 postgres 9 0 7024 7024 6932 S 0,0 0,6 0:00 postmaster\n> > 10852 postgres 9 0 489M 489M 487M S 0,0 49,0 0:32 postmaster\n> > 10869 postgres 9 0 357M 357M 356M S 0,0 35,8 0:21 postmaster\n> > 10908 postgres 9 0 263M 263M 262M S 0,0 26,4 0:20 postmaster\n> > 10909 postgres 9 0 283M 283M 281M S 0,0 28,4 0:19 postmaster\n> > 10932 postgres 9 0 288M 288M 286M S 0,0 28,9 0:13 postmaster\n> > 10946 postgres 9 0 213M 213M 211M S 0,0 21,4 0:06 postmaster\n> > 10947 postgres 9 0 239M 239M 238M S 0,0 24,0 0:07 postmaster\n> > 10948 postgres 9 0 292M 292M 290M S 0,0 29,2 0:09 postmaster\n> > 10957 postgres 9 0 214M 214M 212M S 0,0 21,5 0:10 postmaster\n> > 10964 postgres 9 0 58156 56M 56400 S 0,0 5,6 0:05 postmaster\n> > 10974 postgres 9 0 50860 49M 49120 S 0,0 4,9 0:04 postmaster\n> > 10975 postgres 9 0 209M 209M 207M S 0,0 21,0 0:04 postmaster\n> > 10976 postgres 9 0 174M 174M 172M S 0,0 17,5 0:08 postmaster\n> > 10977 postgres 9 0 52484 51M 50932 S 0,0 5,1 0:05 postmaster\n> > 10990 postgres 9 0 199M 199M 197M S 0,0 19,9 0:06 postmaster\n> > 10993 postgres 9 0 141M 141M 139M S 0,0 14,1 0:01 postmaster\n> > 10998 postgres 9 0 
181M 181M 180M S 0,0 18,2 0:04 postmaster\n> > 10999 postgres 9 0 139M 139M 138M S 0,0 14,0 0:01 postmaster\n> > 11001 postgres 9 0 45484 44M 43948 S 0,0 4,4 0:01 postmaster\n> > 11006 postgres 9 0 15276 14M 13952 S 0,0 1,4 0:00 postmaster\n> >\n> > now, does this look better in your eyes?\n> >\n> > > What are your current settings? Could you please repost. I lost earlier\n> > > thread(Sorry for that.. Had a HDD meltdown here couple of days back. Lost few\n> > > mails..)\n> >\n> > do you need more information here?\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n>\n>\n\n", "msg_date": "Thu, 14 Nov 2002 16:20:56 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "hi,\n\nthis is what it look like right now... looks like 69 MB of shared memory...\n\n------ Shared Memory Segments --------\nkey shmid owner perms Bytes nattch Status\n0x0052e2c1 131072 postgres 600 69074944 19\n\n------ Semaphore Arrays --------\nkey semid owner perms nsems Status\n0x0052e2c1 655360 postgres 600 17\n0x0052e2c2 688129 postgres 600 17\n0x0052e2c3 720898 postgres 600 17\n\n------ Message Queues --------\nkey msqid owner perms used-bytes messages\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nSent: Thursday, November 14, 2002 4:25 PM\nSubject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n\n\n> On Thursday 14 November 2002 08:50 pm, you wrote:\n> > by the way: today we update to kernel 2.4.19 and we measured BIG\n> > performance gains! however, since the upgrade 'top' doesn't show any\n> > shared memory in the summary any longer... yet for every process\n> > it lists a certain amount of shared mem... is this a kernel/top issue\n> > or did I miss something here?\n>\n> No. The shared memory accounting is turned off because it is seemingly much\n> complex. Process do share memory. Check output of ipcs as root..\n>\n> Shridhar\n\n", "msg_date": "Thu, 14 Nov 2002 16:31:29 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "On Thursday 14 November 2002 09:01 pm, you wrote:\n> this is what it look like right now... 
looks like 69 MB of shared memory...\n> ------ Shared Memory Segments --------\n> key shmid owner perms Bytes nattch Status\n> 0x0052e2c1 131072 postgres 600 69074944 19\n\nWell, if you sample this figure for min/max/avg usage say for a day, you will \nhave sufficient idea as in what are your exact requirements are in terms of \nshared buffers. I would say 5% more that would prove to be much more optimal \nsetting. IMO it's worth of experiement..\n\nJust start out with pretty high to leave some room..\n\n Shridhar\n\n", "msg_date": "Thu, 14 Nov 2002 21:05:28 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "\nHenrik,\n\n> > Well, now it gets more complicated. You need to determine:\n> > A) The median processing time of each of those requests.\n> > B) The amount of Sort_mem actually required for each request.\n> \n> as I am dealing with postgres for a webserver the median processing\n> time of each request has got to be <1 sec. how can i measure\n> the amount of sort_mem needed?\n\nThrough experimentation, mostly. SInce you are in a high-volume, small-query \nenvironment, I would try *lowering* your sort mem to see if that has an \nadverse impact on queries. A good quick test would be to cut your sort_mem \nin half, and then run an EXPLAIN on the query from which you expect the \nlargest result set, and see if the SORT time on the query has been increased.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 15 Nov 2002 10:55:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" } ]
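The sort_mem experiment described at the end of this thread is easy to script. The sketch below is only an illustration: the database, table and column names (kunden, logfile, stamp) are placeholders standing in for whatever query produces the largest sort, and EXPLAIN ANALYZE assumes PostgreSQL 7.2 or later. Because SET only affects the current session, the value in postgresql.conf stays untouched while the two plans are compared.

# run the same ORDER BY query under two per-session sort_mem settings
# and compare the reported runtime of the sort step
# (all object names below are placeholders, not the real schema)
psql kunden <<'EOF'
SET sort_mem = 512;
EXPLAIN ANALYZE SELECT * FROM logfile ORDER BY stamp;
SET sort_mem = 2048;
EXPLAIN ANALYZE SELECT * FROM logfile ORDER BY stamp;
EOF

If the sort becomes noticeably slower at the lower setting, the extra sort_mem is doing real work; if the timings barely move, that memory is better left to the kernel's disk cache.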
[ { "msg_contents": "Hi!\n\nI've seen on this list some calculations concerning buffers and sort_mem\nsettings. Could you tell me if there is a document about such a\ncalculation? I'd like to use nearly all of my RAM for postgres.\n\nRichard.\n\n\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Thu, 14 Nov 2002 10:43:44 +0100", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Docs about buffers and sortmem setting" }, { "msg_contents": "On Thu, Nov 14, 2002 at 10:43:44AM +0100, Ryszard Lach wrote:\n> Hi!\n> \n> I've seen on this list some calculations concerning buffers and sort_mem\n> settings. Could you tell me if there is a document about such a\n> calculation? I'd like to use nearly all of my RAM for postgres.\n\nProbably that's not true. You'll likely cause swapping if you try\nto.\n\nThe general rule of thumb is to try about 25% of physical memory for\nyour buffer size. Some people like to increase from there, until\nswapping starts, and then back off; but there are arguments against\ndoing this, given the efficiency of modern filesystem buffering. \n\nIn practice, a buffer size in the tens of thousands is probably\nadequate. We actually have discovered long-term negative performance\neffects if the buffers are set too large.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 14 Nov 2002 06:56:14 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" }, { "msg_contents": "> The general rule of thumb is to try about 25% of physical memory for\n> your buffer size. Some people like to increase from there, until\n> swapping starts, and then back off; but there are arguments against\n> doing this, given the efficiency of modern filesystem buffering.\n\nHow about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM\nmachine running, and I gave 800 MB to postgres shared buffers. AFAIK Linux\nuser space can handle only 1 GB and the rest is for kernel buffer and\ncache..\n\nRegards,\nBjoern\n\n", "msg_date": "Thu, 14 Nov 2002 13:19:57 +0100", "msg_from": "\"Bjoern Metzdorf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" }, { "msg_contents": "On Thu, Nov 14, 2002 at 01:19:57PM +0100, Bjoern Metzdorf wrote:\n\n> How about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM\n> machine running, and I gave 800 MB to postgres shared buffers. AFAIK Linux\n> user space can handle only 1 GB and the rest is for kernel buffer and\n> cache..\n\nHow big is your data set? If it's smaller than 800 MB, you're\nwasting the buffers anyway.\n\nThe thing is that the OS will buffer what you read anyway, so\ndepending on how large your buffers are and how much memory your\nfilesystem is able to use for its buffersm, you may actually be storing\ntwice in memory everything in the shared memory: once in the shared\narea, and another time in the filesystem buffer.\n\nOn our 16 G Solaris (Ultra SPARC) boxes, we found that using a gig\nfor shared buffers was actually worse than a slightly lower amount,\nunder Sol 7. 
The filesystem buffering is too good, so even though\nthe system call to the \"filesystem\" (which turns out to be just to\nmemory, because of the buffer) has a measurable cost, the\nimplementation of the shared-buffer handling is bad enough that it\ncosts _more_ to manage large buffers. Smaller buffers seem not to\nface the difficulty. I haven't a clue why.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 14 Nov 2002 08:35:14 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On our 16 G Solaris (Ultra SPARC) boxes, we found that using a gig\n> for shared buffers was actually worse than a slightly lower amount,\n> under Sol 7. The filesystem buffering is too good, so even though\n> the system call to the \"filesystem\" (which turns out to be just to\n> memory, because of the buffer) has a measurable cost, the\n> implementation of the shared-buffer handling is bad enough that it\n> costs _more_ to manage large buffers. Smaller buffers seem not to\n> face the difficulty. I haven't a clue why.\n\nWell, part of the reason is that a lot of the data in shared_buffers\nhas to be effectively duplicated in the kernel's I/O caches, because\nit's frequently accessed. So while I'd think the cost of fetching a\npage from the buffer pool is lower than from the OS' cache, increasing\nthe size of the Postgres buffer pool effectively decreases the total\namount of RAM available for caching.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 12:20:49 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" }, { "msg_contents": "On Thu, Nov 14, 2002 at 12:20:49PM -0500, Neil Conway wrote:\n> Well, part of the reason is that a lot of the data in shared_buffers\n> has to be effectively duplicated in the kernel's I/O caches, because\n> it's frequently accessed. So while I'd think the cost of fetching a\n> page from the buffer pool is lower than from the OS' cache, increasing\n> the size of the Postgres buffer pool effectively decreases the total\n> amount of RAM available for caching.\n\nWell, yes, but on a machine with 16 G and a data set < 16 G, that's\nnot the issue. A 1G shared buffer is too big anyway, according to\nour experience: it's fast at the beginning, but performance degrades. \nI don't know why.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 14 Nov 2002 13:01:19 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" }, { "msg_contents": "On Thu, 14 Nov 2002, Bjoern Metzdorf wrote:\n\n> > The general rule of thumb is to try about 25% of physical memory for\n> > your buffer size. Some people like to increase from there, until\n> > swapping starts, and then back off; but there are arguments against\n> > doing this, given the efficiency of modern filesystem buffering.\n> \n> How about 32-bit Linux machines with more than 1 GB RAM? We have a 2 GB RAM\n> machine running, and I gave 800 MB to postgres shared buffers. 
AFAIK Linux\n> user space can handle only 1 GB and the rest is for kernel buffer and\n> cache..\n\nActually, I think the limit is 2 or 3 gig depending on how your kernel was \ncompiled, but testing by folks on the list seems to show a maximum of \nunder 2 gig. I'm a little fuzzy on it, you might wanna search the \narchives. I'm not sure if that was a linux or a postgresql problem, and \nit was reported several months back.\n\nMemory slowly fading.... :-)\n\n", "msg_date": "Thu, 14 Nov 2002 12:27:42 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs about buffers and sortmem setting" } ]
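Taking the rules of thumb from this thread together with Tom Lane's earlier advice -- a few thousand shared buffers rather than half of RAM, with SHMMAX raised just far enough to cover the segment -- a rough sketch could look like the following. The ~8.2K-per-buffer figure is only the approximation quoted earlier in these threads, and the concrete numbers are illustrative rather than a tuned recommendation for any particular box.

# kernel side (Linux 2.4): allow a shared memory segment of roughly 48 MB
echo 50000000 > /proc/sys/kernel/shmmax
# postgresql.conf (restart the postmaster afterwards):
#   shared_buffers = 4096    # about 4096 * 8.2K, i.e. roughly 34 MB of shared memory
#   sort_mem = 1024          # per sort, per backend; keep modest with many connections

Everything not handed to PostgreSQL this way is deliberately left to the operating system's disk cache, which is the point made about double-buffering above.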
[ { "msg_contents": "\nHi all,\n\nthis is how it looks like, when my system is busy (right now!!!)\n\n50 concurrent visitors at the same time surfing through our web-pages\n\nps ax | grep postgres:\n22568 ? S 0:00 postgres: stats buffer process\n22569 ? S 0:00 postgres: stats collector process\n22577 ? S 0:15 postgres: postgres kunden 62.116.172.180 INSERT\n22578 ? S 0:19 postgres: postgres kunden 62.116.172.180 UPDATE\n22582 ? S 0:14 postgres: postgres kunden 62.116.172.180 idle\n22583 ? S 0:30 postgres: postgres kunden 62.116.172.180 idle\n22584 ? S 0:19 postgres: postgres kunden 62.116.172.180 idle\n22586 ? S 0:17 postgres: postgres kunden 62.116.172.180 idle\n22587 ? S 0:15 postgres: postgres kunden 62.116.172.180 idle\n22588 ? S 0:20 postgres: postgres kunden 62.116.172.180 INSERT\n22590 ? S 0:15 postgres: postgres kunden 62.116.172.180 INSERT\n22592 ? S 0:18 postgres: postgres kunden 62.116.172.180 INSERT\n22593 ? S 0:15 postgres: postgres kunden 62.116.172.180 idle\n22594 ? S 0:19 postgres: postgres kunden 62.116.172.180 UPDATE\n22601 ? D 0:22 postgres: postgres kunden 62.116.172.180 SELECT\n22643 ? S 0:14 postgres: postgres kunden 62.116.172.180 idle\n22730 ? D 0:10 postgres: postgres kunden 62.116.172.180 SELECT\n22734 ? D 0:08 postgres: postgres kunden 62.116.172.180 SELECT\n22753 ? S 0:10 postgres: postgres kunden 62.116.172.180 SELECT\n22754 ? S 0:05 postgres: postgres kunden 62.116.172.180 idle\n22755 ? S 0:02 postgres: postgres kunden 62.116.172.180 idle\n22756 ? S 0:02 postgres: postgres kunden 62.116.172.180 idle\n22762 ? S 0:05 postgres: postgres kunden 62.116.172.180 UPDATE\n22764 ? D 0:04 postgres: postgres kunden 62.116.172.180 SELECT\n22765 ? S 0:02 postgres: postgres kunden 62.116.172.180 UPDATE\n22766 ? D 0:02 postgres: postgres kunden 62.116.172.180 SELECT\n22787 ? S 0:02 postgres: postgres kunden 62.116.172.180 idle\n22796 ? S 0:00 postgres: postgres kunden 62.116.172.180 UPDATE\n22803 ? S 0:00 postgres: postgres kunden 62.116.172.180 idle\n22804 ? S 0:01 postgres: postgres kunden 62.116.172.180 idle\n22805 ? S 0:01 postgres: postgres kunden 62.116.172.180 idle\n22806 ? S 0:00 postgres: postgres kunden 62.116.172.180 idle\n22807 ? S 0:00 postgres: postgres kunden 62.116.172.180 idle\n22809 ? D 0:00 postgres: postgres kunden 62.116.172.180 SELECT\n22814 ? S 0:01 postgres: postgres kunden 62.116.172.180 idle\n22815 ? D 0:03 postgres: postgres kunden 62.116.172.180 SELECT\n22818 ? S 0:00 postgres: postgres kunden 62.116.172.180 idle\n22821 ? S 0:00 postgres: postgres kunden 62.116.172.180 idle\n22824 ? S 0:01 postgres: postgres kunden 62.116.172.180 UPDATE\n22825 ? S 0:00 postgres: postgres kunden 62.116.172.180 UPDATE\n22829 ? S 0:00 postgres: checkpoint subprocess\n22830 ? 
S 0:00 postgres: postgres kunden 62.116.172.180 INSERT\n22832 pts/0 S 0:00 grep postgres\n\n-> I count 20 concurrent database queries above ...\n\nvmstat 1 5:\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n\n\nfree:\n total used free shared buffers cached\nMem: 1020808 1015860 4948 531424 5972 309548\n-/+ buffers/cache: 700340 320468\nSwap: 1028112 60 1028052\n\n\nw:\n12:04pm up 2 days, 17:44, 1 user, load average: 10.28, 7.22, 3.88\nUSER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT\nroot pts/0 condor.city-map. 11:46am 0.00s 0.09s 0.01s w\n\n\nthis is when things begin to go more slowly....\n\nany advice?\n\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n", "msg_date": "Thu, 14 Nov 2002 11:03:54 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "On 14 Nov 2002 at 11:03, Henrik Steffen wrote:\n> vmstat 1 5:\n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n> 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n> 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n> 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n> 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n> w:\n> 12:04pm up 2 days, 17:44, 1 user, load average: 10.28, 7.22, 3.88\n> USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT\n> root pts/0 condor.city-map. 11:46am 0.00s 0.09s 0.01s w\n> this is when things begin to go more slowly....\n\nTwo things immediately noticable.. Load average and block ins..\n\nEither your disk write BW is saturated or CPU is too full, which I believe is \nthe case. HAve you ever got faster write performance than 12K blocks say? Disk \nBW may be a bottleneck here.. Are they IDE disks?\n\nBesides almost transactions are insert/update.. And if you have 11K blocks per \nsecond to write.. I suggest you vacuum analyse most used table one in a minute \nor so. Decide the best frequency by trial and error. A good start is double the \n\ntime it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave \na gap of 120 sec. between two runs of vacuum.\n\nYou need to vacuum only those tables which are heavily updated. 
This will make \nvacuum faster.\n\nHTH\nBye\n Shridhar\n\n--\nNouvelle cuisine, n.:\tFrench for \"not enough food\".Continental breakfast, n.:\t\nEnglish for \"not enough food\".Tapas, n.:\tSpanish for \"not enough food\".Dim Sum, \n\nn.:\tChinese for more food than you've ever seen in your entire life.\n\n", "msg_date": "Thu, 14 Nov 2002 15:58:17 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "On Thu, 14 Nov 2002 11:03:54 +0100, \"Henrik Steffen\"\n<[email protected]> wrote:\n>this is how it looks like, when my system is busy (right now!!!)\n>vmstat 1 5:\n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n> 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n> 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n> 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n> 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n\nMore than 10000 disk blocks coming in per second looks quite\nimpressive, IMHO. (I wonder if this is due to seq scans?) But the\ncpu idle column tells us that you are not CPU bound any more.\n\n\n>free:\n> total used free shared buffers cached\n>Mem: 1020808 1015860 4948 531424 5972 309548\n>-/+ buffers/cache: 700340 320468\n>Swap: 1028112 60 1028052\n\nThere are two camps when it comes to PG shared buffers: (a) set\nshared_buffers as high as possible to minimize PG buffer misses vs.\n(b) assume that transfers between OS and PG buffers are cheap and\nchoose a moderate value for shared_buffers (\"in the low thousands\") to\nlet the operating system's disk caching do its work.\n\nBoth camps agree that reserving half of your available memory for\nshared buffers is a Bad Thing, because whenever a page cannot be found\nin PG's buffers it is almost certainly not in the OS cache and has to\nbe fetched from disk. So half of your memory (the OS cache) is wasted\nfor nothing.\n\nFYI, I belong to the latter camp and I strongly feel you should set\nshared_buffers to something near 4000.\n\nServus\n Manfred\n", "msg_date": "Thu, 14 Nov 2002 18:15:45 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "of course, there are some seq scans... one of the most\ndifficult queries is for example a kind of full text\nsearch, that searches through 8 different tables with\neach between 300.000 and 500.000 rows and 5-50 columns,\nbut that's a different issue (need a full-text-search-engine...)\n\n\nI will do some experiments with both camps you described\n\n\nThanks to all of you who wrote answers to this thread\n\nIt has helped me a huge lot !\n\n\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. 
+49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Manfred Koizar\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Thursday, November 14, 2002 6:15 PM\nSubject: Re: [GENERAL] Upgrade to dual processor machine?\n\n\n> On Thu, 14 Nov 2002 11:03:54 +0100, \"Henrik Steffen\"\n> <[email protected]> wrote:\n> >this is how it looks like, when my system is busy (right now!!!)\n> >vmstat 1 5:\n> > procs memory swap io system cpu\n> > r b w swpd free buff cache si so bi bo in cs us sy id\n> > 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n> > 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n> > 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n> > 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n> > 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n>\n> More than 10000 disk blocks coming in per second looks quite\n> impressive, IMHO. (I wonder if this is due to seq scans?) But the\n> cpu idle column tells us that you are not CPU bound any more.\n>\n>\n> >free:\n> > total used free shared buffers cached\n> >Mem: 1020808 1015860 4948 531424 5972 309548\n> >-/+ buffers/cache: 700340 320468\n> >Swap: 1028112 60 1028052\n>\n> There are two camps when it comes to PG shared buffers: (a) set\n> shared_buffers as high as possible to minimize PG buffer misses vs.\n> (b) assume that transfers between OS and PG buffers are cheap and\n> choose a moderate value for shared_buffers (\"in the low thousands\") to\n> let the operating system's disk caching do its work.\n>\n> Both camps agree that reserving half of your available memory for\n> shared buffers is a Bad Thing, because whenever a page cannot be found\n> in PG's buffers it is almost certainly not in the OS cache and has to\n> be fetched from disk. So half of your memory (the OS cache) is wasted\n> for nothing.\n>\n> FYI, I belong to the latter camp and I strongly feel you should set\n> shared_buffers to something near 4000.\n>\n> Servus\n> Manfred\n>\n\n", "msg_date": "Thu, 14 Nov 2002 20:36:28 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine?" }, { "msg_contents": "Hi Shridhar,\n\ndo you seriously think that I should vacuum frequently updated/inserted\ntables every 120 seconds ?\n\nThis is what it says in the manual and what I have been doing until today:\n\n\"You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to run VACUUM\nevery night in an automated manner. For tables with few modifications, VACUUM should be run less frequently. The command exclusively\nlocks the table while processing. \"\n\nAnd:\n\n\"You should run VACUUM ANALYZE when a table is initially loaded and when a table's data changes dramatically. \"\n\n\nI have many UPDATEs and INSERTs on my log-statistics. For each http-request\nthere will be an INSERT into the logfile. 
And if certain customer pages\nare downloaded there will even be an UPDATE in a customer-statistics table\ncausing a hits column to be set to hits+1... I didn't think this was a\ndramatical change so far.\n\nStill sure to run VACUUM ANALYZE on these tables so often?\n\nVACUUM ANALYZE takes about 30 seconds on one of these tables and will be done once\nevery night automatically.\n\n\n> Besides almost transactions are insert/update.. And if you have 11K blocks per\n> second to write.. I suggest you vacuum analyse most used table one in a minute\n> or so. Decide the best frequency by trial and error. A good start is double the\n> time it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave\n> a gap of 120 sec. between two runs of vacuum.\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Shridhar Daithankar\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, November 14, 2002 11:28 AM\nSubject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n\n\n> On 14 Nov 2002 at 11:03, Henrik Steffen wrote:\n> > vmstat 1 5:\n> > procs memory swap io system cpu\n> > r b w swpd free buff cache si so bi bo in cs us sy id\n> > 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n> > 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n> > 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n> > 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n> > 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n> > w:\n> > 12:04pm up 2 days, 17:44, 1 user, load average: 10.28, 7.22, 3.88\n> > USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT\n> > root pts/0 condor.city-map. 11:46am 0.00s 0.09s 0.01s w\n> > this is when things begin to go more slowly....\n>\n> Two things immediately noticable.. Load average and block ins..\n>\n> Either your disk write BW is saturated or CPU is too full, which I believe is\n> the case. HAve you ever got faster write performance than 12K blocks say? Disk\n> BW may be a bottleneck here.. Are they IDE disks?\n>\n> Besides almost transactions are insert/update.. And if you have 11K blocks per\n> second to write.. I suggest you vacuum analyse most used table one in a minute\n> or so. Decide the best frequency by trial and error. A good start is double the\n>\n> time it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave\n> a gap of 120 sec. between two runs of vacuum.\n>\n> You need to vacuum only those tables which are heavily updated. 
This will make\n> vacuum faster.\n>\n> HTH\n> Bye\n> Shridhar\n>\n> --\n> Nouvelle cuisine, n.: French for \"not enough food\".Continental breakfast, n.:\n> English for \"not enough food\".Tapas, n.: Spanish for \"not enough food\".Dim Sum,\n>\n> n.: Chinese for more food than you've ever seen in your entire life.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n", "msg_date": "Thu, 14 Nov 2002 21:36:33 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n\n> This is what it says in the manual and what I have been doing until today:\n> \n> \"You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to run VACUUM\n> every night in an automated manner. For tables with few modifications, VACUUM should be run less frequently. The command exclusively\n> locks the table while processing. \"\n\nThe \"exclusive lock\" part is no longer true as of 7.2.X--it is now\nmuch cheaper to run VACUUM. What version were you running again?\n\n-Doug\n", "msg_date": "14 Nov 2002 15:50:58 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "Henrik,\n\n> do you seriously think that I should vacuum frequently\n> updated/inserted\n> tables every 120 seconds ?\n> \n> This is what it says in the manual and what I have been doing until\n> today:\n> \n> \"You should run VACUUM periodically to clean out expired rows. For\n> tables that are heavily modified, it is useful to run VACUUM\n> every night in an automated manner. For tables with few\n> modifications, VACUUM should be run less frequently. The command\n> exclusively\n> locks the table while processing. \"\n> \n> And:\n> \n> \"You should run VACUUM ANALYZE when a table is initially loaded and\n> when a table's data changes dramatically. \"\n\nThat's the postgres *7.1* manual you're reading. You need to read the\n7.2 online manual; VACUUM has changed substantially.\n\n-Josh Berkus\n\n\n", "msg_date": "Thu, 14 Nov 2002 12:57:42 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "I use the Async Query technique of PG to do such tasks as Vacuum-ing.\n\nHenrik Steffen wrote:\n\n>Hi Shridhar,\n>\n>do you seriously think that I should vacuum frequently updated/inserted\n>tables every 120 seconds ?\n>\n>This is what it says in the manual and what I have been doing until today:\n>\n>\"You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to run VACUUM\n>every night in an automated manner. For tables with few modifications, VACUUM should be run less frequently. The command exclusively\n>locks the table while processing. \"\n>\n>And:\n>\n>\"You should run VACUUM ANALYZE when a table is initially loaded and when a table's data changes dramatically. \"\n>\n>\n>I have many UPDATEs and INSERTs on my log-statistics. For each http-request\n>there will be an INSERT into the logfile. And if certain customer pages\n>are downloaded there will even be an UPDATE in a customer-statistics table\n>causing a hits column to be set to hits+1... 
I didn't think this was a\n>dramatical change so far.\n>\n>Still sure to run VACUUM ANALYZE on these tables so often?\n>\n>VACUUM ANALYZE takes about 30 seconds on one of these tables and will be done once\n>every night automatically.\n>\n>\n> \n>\n>>Besides almost transactions are insert/update.. And if you have 11K blocks per\n>>second to write.. I suggest you vacuum analyse most used table one in a minute\n>>or so. Decide the best frequency by trial and error. A good start is double the\n>>time it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave\n>>a gap of 120 sec. between two runs of vacuum.\n>> \n>>\n>\n>--\n>\n>Mit freundlichem Gru�\n>\n>Henrik Steffen\n>Gesch�ftsf�hrer\n>\n>top concepts Internetmarketing GmbH\n>Am Steinkamp 7 - D-21684 Stade - Germany\n>--------------------------------------------------------\n>http://www.topconcepts.com Tel. +49 4141 991230\n>mail: [email protected] Fax. +49 4141 991233\n>--------------------------------------------------------\n>24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n>--------------------------------------------------------\n>Ihr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\n>System-Partner gesucht: http://www.franchise.city-map.de\n>--------------------------------------------------------\n>Handelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n>--------------------------------------------------------\n>\n>----- Original Message -----\n>From: \"Shridhar Daithankar\" <[email protected]>\n>To: <[email protected]>\n>Sent: Thursday, November 14, 2002 11:28 AM\n>Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n>\n>\n> \n>\n>>On 14 Nov 2002 at 11:03, Henrik Steffen wrote:\n>> \n>>\n>>>vmstat 1 5:\n>>> procs memory swap io system cpu\n>>> r b w swpd free buff cache si so bi bo in cs us sy id\n>>> 1 8 1 60 4964 5888 309684 0 0 176 74 16 32 25 9 66\n>>> 0 6 3 60 4964 5932 308772 0 0 6264 256 347 347 13 9 78\n>>> 0 5 1 60 4964 5900 309364 0 0 9312 224 380 309 11 6 83\n>>> 1 4 1 60 5272 5940 309152 0 0 10320 116 397 429 17 6 77\n>>> 1 4 1 60 4964 5896 309512 0 0 11020 152 451 456 14 10 76\n>>>w:\n>>>12:04pm up 2 days, 17:44, 1 user, load average: 10.28, 7.22, 3.88\n>>>USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT\n>>>root pts/0 condor.city-map. 11:46am 0.00s 0.09s 0.01s w\n>>>this is when things begin to go more slowly....\n>>> \n>>>\n>>Two things immediately noticable.. Load average and block ins..\n>>\n>>Either your disk write BW is saturated or CPU is too full, which I believe is\n>>the case. HAve you ever got faster write performance than 12K blocks say? Disk\n>>BW may be a bottleneck here.. Are they IDE disks?\n>>\n>>Besides almost transactions are insert/update.. And if you have 11K blocks per\n>>second to write.. I suggest you vacuum analyse most used table one in a minute\n>>or so. Decide the best frequency by trial and error. A good start is double the\n>>\n>>time it takes for vacuum. i.e. if vacuum analyse takes 60 sec to finish, leave\n>>a gap of 120 sec. between two runs of vacuum.\n>>\n>>You need to vacuum only those tables which are heavily updated. 
This will make\n>>vacuum faster.\n>>\n>>HTH\n>>Bye\n>> Shridhar\n>>\n>>--\n>>Nouvelle cuisine, n.: French for \"not enough food\".Continental breakfast, n.:\n>>English for \"not enough food\".Tapas, n.: Spanish for \"not enough food\".Dim Sum,\n>>\n>>n.: Chinese for more food than you've ever seen in your entire life.\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: Have you searched our list archives?\n>>\n>>http://archives.postgresql.org\n>> \n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n>\n\n\n\n", "msg_date": "Thu, 14 Nov 2002 13:05:26 -0800", "msg_from": "Medi Montaseri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "On 14 Nov, Henrik Steffen wrote:\n> of course, there are some seq scans... one of the most\n> difficult queries is for example a kind of full text\n> search, that searches through 8 different tables with\n> each between 300.000 and 500.000 rows and 5-50 columns,\n> but that's a different issue (need a full-text-search-engine...)\n\nAh, well, it may be worthwhile to check out fulltextindex or tsearch\nin contrib/. They both require some changes to the way you do queries,\nbut they may be helpful in speeding up those queries.\n\n-johnnnnnnnnn\n\n\n", "msg_date": "Thu, 14 Nov 2002 15:26:05 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "On 14 Nov, Henrik Steffen wrote:\n> of course, there are some seq scans... one of the most\n> difficult queries is for example a kind of full text\n> search, that searches through 8 different tables with\n> each between 300.000 and 500.000 rows and 5-50 columns,\n> but that's a different issue (need a full-text-search-engine...)\n\nAh, well, it may be worthwhile to check out fulltextindex or tsearch\nin contrib/. They both require some changes to the way you do queries,\nbut they may be helpful in speeding up those queries.\n\n-johnnnnnnnnn\n\n\n", "msg_date": "Thu, 14 Nov 2002 15:26:05 -0600 (CST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "\"Henrik Steffen\" <[email protected]> writes:\n> I have many UPDATEs and INSERTs on my log-statistics. For each\n> http-request there will be an INSERT into the logfile. And if\n> certain customer pages are downloaded there will even be an UPDATE\n> in a customer-statistics table causing a hits column to be set to\n> hits+1... I didn't think this was a dramatical change so far.\n\nJust to clarify, INSERT does not create dead rows -- tables that have\nlots of INSERTS don't need to be vacuumed particularly often. In\ncontrast, an UPDATE is really a DELETE plus an INSERT, so it *will*\ncreate dead rows.\n\nTo get an idea of how many dead tuples there are in a table, try\ncontrib/pgstattuple (maybe it's only in 7.3's contrib/, not sure).\n\n> Still sure to run VACUUM ANALYZE on these tables so often?\n\nWell, the ANALYZE part is probably rarely needed, as I wouldn't think\nthe statistical distribution of the data in the table changes very\nfrequently -- so maybe run a database-wide ANALYZE once per day? 
But\nif a table is updated frequently, VACUUM frequently is definately a\ngood idea.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <[email protected]> || PGP Key ID: DB3C29FC\n\n", "msg_date": "14 Nov 2002 16:28:29 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "Henrik,\n\n> Ah, well, it may be worthwhile to check out fulltextindex or tsearch\n> in contrib/. They both require some changes to the way you do\n> queries,\n> but they may be helpful in speeding up those queries.\n\nOr OpenFTS: www.openfts.org\n\n-Josh\n", "msg_date": "Thu, 14 Nov 2002 15:37:28 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "On 14 Nov 2002 at 21:36, Henrik Steffen wrote:\n\n> do you seriously think that I should vacuum frequently updated/inserted\n> tables every 120 seconds ?\n\nIts not about 120 seconds. Its about how many new and dead tuples your server \nis generating.\n\nHere is a quick summary\n\ninsert: New tuple:vacuum analyse updates that statistics.\nupdate: Causes a dead tuple: Vacuum analyse marks dead tuple for reuse saving \nbuffer space. \ndelete: Causes a dead unusable tuple: Vacuum full is required to reclaim the \nspace on the disk.\n\nVacuum analyse is nonblocking and vacuum full is blocking.\n\nIf you are generating 10 dead pages i.e. 80K of data in matter of minutes. \nvacuum is warranted for optimal performance..\n\n> I have many UPDATEs and INSERTs on my log-statistics. For each http-request\n> there will be an INSERT into the logfile. And if certain customer pages\n> are downloaded there will even be an UPDATE in a customer-statistics table\n> causing a hits column to be set to hits+1... I didn't think this was a\n> dramatical change so far.\n\nOK.. Schedule a cron job that would vacuum analyse every 5/10 minutes.. And see \nif that gives you overall increase in throughput\n\n> Still sure to run VACUUM ANALYZE on these tables so often?\n\nIMO you should..\n\nAlso have a look at http://gborg.postgresql.org/project/pgavd/projdisplay.php.\n\nI have written it but I don't know anybody using it. If you use it, I can help \nyou with any bugfixes required. I haven't done too much testing on it. It \nvacuums things based on traffic rather than time. So your database performance \nshould ideally be maintained automatically..\n\nLet me know if you need anything on this.. And use the CVS version please..\n\n\nBye\n Shridhar\n\n--\nlove, n.:\tWhen, if asked to choose between your lover\tand happiness, you'd skip \nhappiness in a heartbeat.\n\n", "msg_date": "Fri, 15 Nov 2002 12:40:25 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" 
}, { "msg_contents": "Concerning the VACUUM issue:\nIn order to test my DB perfomance, I made a script that populates it with\ntest data (about a million rows to start with).\nThe INSERT insert in one of the table triggers an UPDATE in 3 related\ntables, which mean row size is about 50 bytes.\nI found out that it was *essential* to VACUUM the updated tables every 500\nINSERT or so to keep the performance from *heavily* dropping.\nThat's about every 73kB updated or so.\nNow, I guess this memory \"limit\" is depending of PG's configuration and the\nOS characteritics.\n\nIs there any setting in the conf file that is related to this VACUUM and\ndead tuples issue ? Could the \"free-space map\" settings be related (I never\nunderstood what were these settings) ?\n\nBTW, thanx to all of you participating to this thread. Nice to have such a\ncomplete overlook on PG's performance tuning and related OS issues.\n\n\tCedric D.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Shridhar\n> Daithankar\n> Sent: Friday, November 15, 2002 08:10\n> To: [email protected]; [email protected]\n> Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n>\n>\n> On 14 Nov 2002 at 21:36, Henrik Steffen wrote:\n>\n> > do you seriously think that I should vacuum frequently updated/inserted\n> > tables every 120 seconds ?\n>\n> Its not about 120 seconds. Its about how many new and dead tuples\n> your server\n> is generating.\n>\n> Here is a quick summary\n>\n> insert: New tuple:vacuum analyse updates that statistics.\n> update: Causes a dead tuple: Vacuum analyse marks dead tuple for\n> reuse saving\n> buffer space.\n> delete: Causes a dead unusable tuple: Vacuum full is required to\n> reclaim the\n> space on the disk.\n>\n> Vacuum analyse is nonblocking and vacuum full is blocking.\n>\n> If you are generating 10 dead pages i.e. 80K of data in matter of\n> minutes.\n> vacuum is warranted for optimal performance..\n>\n> > I have many UPDATEs and INSERTs on my log-statistics. For each\n> http-request\n> > there will be an INSERT into the logfile. And if certain customer pages\n> > are downloaded there will even be an UPDATE in a\n> customer-statistics table\n> > causing a hits column to be set to hits+1... I didn't think this was a\n> > dramatical change so far.\n>\n> OK.. Schedule a cron job that would vacuum analyse every 5/10\n> minutes.. And see\n> if that gives you overall increase in throughput\n>\n> > Still sure to run VACUUM ANALYZE on these tables so often?\n>\n> IMO you should..\n>\n> Also have a look at\nhttp://gborg.postgresql.org/project/pgavd/projdisplay.php.\n\nI have written it but I don't know anybody using it. If you use it, I can\nhelp\nyou with any bugfixes required. I haven't done too much testing on it. It\nvacuums things based on traffic rather than time. So your database\nperformance\nshould ideally be maintained automatically..\n\nLet me know if you need anything on this.. And use the CVS version please..\n\n\nBye\n Shridhar\n\n--\nlove, n.:\tWhen, if asked to choose between your lover\tand happiness, you'd\nskip\nhappiness in a heartbeat.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Fri, 15 Nov 2002 09:47:23 +0100", "msg_from": "=?US-ASCII?Q?Cedric_Dufour_=28Cogito_Ergo_Soft=29?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" 
}, { "msg_contents": "running 7.2.1 here\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"Doug McNaught\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: <[email protected]>; <[email protected]>; <[email protected]>\nSent: Thursday, November 14, 2002 9:50 PM\nSubject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n\n\n> \"Henrik Steffen\" <[email protected]> writes:\n>\n> > This is what it says in the manual and what I have been doing until today:\n> >\n> > \"You should run VACUUM periodically to clean out expired rows. For tables that are heavily modified, it is useful to run VACUUM\n> > every night in an automated manner. For tables with few modifications, VACUUM should be run less frequently. The command\nexclusively\n> > locks the table while processing. \"\n>\n> The \"exclusive lock\" part is no longer true as of 7.2.X--it is now\n> much cheaper to run VACUUM. What version were you running again?\n>\n> -Doug\n>\n\n", "msg_date": "Fri, 15 Nov 2002 11:11:06 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" }, { "msg_contents": "On Fri, Nov 15, 2002 at 09:47:23AM +0100, Cedric Dufour (Cogito Ergo Soft) wrote:\n> Is there any setting in the conf file that is related to this VACUUM and\n> dead tuples issue ? Could the \"free-space map\" settings be related (I never\n> understood what were these settings) ?\n\nYes. That's what those settings are.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 15 Nov 2002 07:59:17 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Andrew\n> Sullivan\n> Sent: Friday, November 15, 2002 13:59\n> To: [email protected]\n> Subject: Re: [PERFORM] [GENERAL] Upgrade to dual processor machine?\n>\n>\n> On Fri, Nov 15, 2002 at 09:47:23AM +0100, Cedric Dufour (Cogito\n> Ergo Soft) wrote:\n> > Is there any setting in the conf file that is related to this VACUUM and\n> > dead tuples issue ? Could the \"free-space map\" settings be\n> related (I never\n> > understood what were these settings) ?\n>\n> Yes. That's what those settings are.\n>\n\nThe 'Runtime configuration / General operation' part of the doc is quite\nshort on the subject.\n\nIs there any other places to look for more details on this FSM ? 
What policy\nshould drive changes to the FSM settings ?\n\nI guess allowing larger FSM values might improve UPDATE performance (require\nVACUUM less often) but consume RAM that may be more useful elsewhere. Am I\nright ?\n\nHas any one made experience on that matter and what conclusion were drawn ?\nIn other words, shall we try to alter this FSM settings for better\nperfomance or is it better to stick to a regular (shortly timed) VACUUM\nscenario ?\n\n\tCedric\n\n\n", "msg_date": "Fri, 15 Nov 2002 14:37:01 +0100", "msg_from": "=?US-ASCII?Q?Cedric_Dufour_=28Cogito_Ergo_Soft=29?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "On Fri, Nov 15, 2002 at 02:37:01PM +0100, Cedric Dufour (Cogito Ergo Soft) wrote:\n\n> The 'Runtime configuration / General operation' part of the doc is quite\n> short on the subject.\n\nI'm afriad the setting was new in 7.2, and people don't have a great\ndeal of experience with it. So it's difficult to ake\nrecommendations.\n\n> Is there any other places to look for more details on this FSM ? What policy\n> should drive changes to the FSM settings ?\n\nIf your tables change a lot between VACUUM, the FSM may fill up. the\nproblem is that the system is only to keep \"in mind\" so much\ninformation about how many pages can be freed. This affects the\nre-use of disk space.\n\n> I guess allowing larger FSM values might improve UPDATE performance (require\n> VACUUM less often) but consume RAM that may be more useful elsewhere. Am I\n> right ?\n\nNot really. Your better bet is to perform VACUUM often; but if you\ndon't do that, then VACUUM will be able to re-claim more space in the\ntable if your FSM is larger. Is that clear-ish?\n\nYou can estimate the correct value, apparently, by doing some\ncalculations about disk space and turnover. I think it requires some\ncycles where you do VACUUM FULL. There was a discussion on the\n-general list about it some months ago, IIRC.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 15 Nov 2002 09:34:59 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Upgrade to dual processor machine?" }, { "msg_contents": "\nOn Fri, 15 Nov 2002, Shridhar Daithankar wrote:\n\n> On 14 Nov 2002 at 21:36, Henrik Steffen wrote:\n>\n> > do you seriously think that I should vacuum frequently updated/inserted\n> > tables every 120 seconds ?\n>\n> Its not about 120 seconds. Its about how many new and dead tuples your server\n> is generating.\n>\n> Here is a quick summary\n>\n> insert: New tuple:vacuum analyse updates that statistics.\n> update: Causes a dead tuple: Vacuum analyse marks dead tuple for reuse saving\n> buffer space.\n> delete: Causes a dead unusable tuple: Vacuum full is required to reclaim the\n> space on the disk.\n\nAFAIK, the delete line above is wrong. Deleted heap space should be able\nto be reclaimed with normal vacuums within the limitations of the free\nspace map, etc...\n\n", "msg_date": "Fri, 15 Nov 2002 07:46:47 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Upgrade to dual processor machine?" } ]
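To pull the thread's suggestions together: the pattern that emerged is a cron-driven, non-blocking VACUUM ANALYZE of the heavily updated tables every few minutes, a moderate shared_buffers setting in the low thousands, and an FSM sized to the dead-tuple turnover between vacuums. A minimal sketch follows; the database name (citydb), the table names (logfile, customer_stats) and the exact numbers are placeholders to be tuned by trial and error, not values taken from the thread.

  # postgresql.conf (7.2.x) -- illustrative values only
  #   shared_buffers    = 4000     # "low thousands", not half of RAM
  #   sort_mem          = 4096     # kilobytes, per sort operation
  #   max_fsm_pages     = 20000    # roughly the dead pages produced between vacuums
  #   max_fsm_relations = 100
  #
  # crontab: frequent non-blocking vacuum of the hot tables (names are hypothetical)
  */5 * * * *  psql -d citydb -c 'VACUUM ANALYZE logfile; VACUUM ANALYZE customer_stats;' >/dev/null 2>&1
  #
  # nightly database-wide ANALYZE to refresh planner statistics
  15 3 * * *   vacuumdb --analyze citydb >/dev/null 2>&1

The 5-minute interval is only a starting point; as suggested above, measure how long the VACUUM ANALYZE itself takes and leave roughly twice that gap between runs.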
[ { "msg_contents": "On 14 Nov 2002 at 10:30, Wei Weng wrote:\n\n> The term had been mentioned often enough on this mailing list. Can\n> someone enlighten me with some description or a URL where I can read on?\n> And why is it important to postgresql database performace?\n\nWhen programs request more memory than available, OS 'swaps' some memory to \nspecial area on disk and make the memory available. To programs, it gives \nappearance that nearly infinite memory is available.\n\nUnfortunately disk are hell slower than RAM and hence swapping slows things \ndown as it takes much to swap in to disk and swap out of disk. Since OS does \nnot care which programs get swapped, it is possible that postgresql instance \ncan get swapped. That slows down effective memory access to knees..\n\nThat's why for good performance, a serve should never swap..\n\nBye\n Shridhar\n\n--\nPeterson's Admonition:\tWhen you think you're going down for the third time --\t\njust remember that you may have counted wrong.\n\n", "msg_date": "Thu, 14 Nov 2002 20:07:24 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: swapping?" }, { "msg_contents": "\nUsually vmstat shows page ins. I also write a paper on this:\n\n\thttp://techdocs.postgresql.org/redir.php?link=http://www.ca.postgresql.org/docs/momjian/hw_performance/\n\nThe techdocs site has other stuff, see \"Optimzation\" section:\n\n\thttp://techdocs.postgresql.org/\n\n---------------------------------------------------------------------------\n\nWei Weng wrote:\n> How do you notice that if a system started swapping or not?\n> \n> Thanks\n> \n> On Thu, 2002-11-14 at 09:37, Shridhar Daithankar wrote:\n> > On 14 Nov 2002 at 10:30, Wei Weng wrote:\n> > \n> > > The term had been mentioned often enough on this mailing list. Can\n> > > someone enlighten me with some description or a URL where I can read on?\n> > > And why is it important to postgresql database performace?\n> > \n> > When programs request more memory than available, OS 'swaps' some memory to \n> > special area on disk and make the memory available. To programs, it gives \n> > appearance that nearly infinite memory is available.\n> > \n> > Unfortunately disk are hell slower than RAM and hence swapping slows things \n> > down as it takes much to swap in to disk and swap out of disk. Since OS does \n> > not care which programs get swapped, it is possible that postgresql instance \n> > can get swapped. That slows down effective memory access to knees..\n> > \n> > That's why for good performance, a serve should never swap..\n> > \n> > Bye\n> > Shridhar\n> > \n> > --\n> > Peterson's Admonition:\tWhen you think you're going down for the third time --\t\n> > just remember that you may have counted wrong.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> -- \n> Wei Weng\n> Network Software Engineer\n> KenCast Inc.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Nov 2002 09:56:22 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" }, { "msg_contents": "On Thu, Nov 14, 2002 at 11:07:00AM -0500, Wei Weng wrote:\n\n> 1: I see sort_mem setting in my postgresql.conf, what is the \"buffer\n> size\" people often talk about?\n\nIt's the shared buffer configuration option in your config file.\n\n> 2: What is the unit of sort_mem in postgresql.conf? In my basic(default)\n> installation, I have sort_mem=512. Is that 512MBs or 512KBs?\n\nIt's in kilobytes.\n\nIt appears that it might profit you to read the administrator's\nguide. There's lots in the Fine Manuals. The section on runtime\nconfiguration is at\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/runtime-config.html\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 14 Nov 2002 10:17:27 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" }, { "msg_contents": "The term had been mentioned often enough on this mailing list. Can\nsomeone enlighten me with some description or a URL where I can read on?\nAnd why is it important to postgresql database performace?\n\nThanks\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "14 Nov 2002 10:30:17 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "swapping?" }, { "msg_contents": "How do you notice that if a system started swapping or not?\n\nThanks\n\nOn Thu, 2002-11-14 at 09:37, Shridhar Daithankar wrote:\n> On 14 Nov 2002 at 10:30, Wei Weng wrote:\n> \n> > The term had been mentioned often enough on this mailing list. Can\n> > someone enlighten me with some description or a URL where I can read on?\n> > And why is it important to postgresql database performace?\n> \n> When programs request more memory than available, OS 'swaps' some memory to \n> special area on disk and make the memory available. To programs, it gives \n> appearance that nearly infinite memory is available.\n> \n> Unfortunately disk are hell slower than RAM and hence swapping slows things \n> down as it takes much to swap in to disk and swap out of disk. Since OS does \n> not care which programs get swapped, it is possible that postgresql instance \n> can get swapped. That slows down effective memory access to knees..\n> \n> That's why for good performance, a serve should never swap..\n> \n> Bye\n> Shridhar\n> \n> --\n> Peterson's Admonition:\tWhen you think you're going down for the third time --\t\n> just remember that you may have counted wrong.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "14 Nov 2002 10:53:30 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" }, { "msg_contents": "I hope this doesn't sound too stupid. :)\n\n1: I see sort_mem setting in my postgresql.conf, what is the \"buffer\nsize\" people often talk about?\n\n2: What is the unit of sort_mem in postgresql.conf? In my basic(default)\ninstallation, I have sort_mem=512. 
Is that 512MBs or 512KBs?\n\nThanks\n\nOn Thu, 2002-11-14 at 09:56, Bruce Momjian wrote:\n> Usually vmstat shows page ins. I also write a paper on this:\n> \n> \thttp://techdocs.postgresql.org/redir.php?link=http://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> The techdocs site has other stuff, see \"Optimzation\" section:\n> \n> \thttp://techdocs.postgresql.org/\n> \n> ---------------------------------------------------------------------------\n> \n> Wei Weng wrote:\n> > How do you notice that if a system started swapping or not?\n> > \n> > Thanks\n> > \n> > On Thu, 2002-11-14 at 09:37, Shridhar Daithankar wrote:\n> > > On 14 Nov 2002 at 10:30, Wei Weng wrote:\n> > > \n> > > > The term had been mentioned often enough on this mailing list. Can\n> > > > someone enlighten me with some description or a URL where I can read on?\n> > > > And why is it important to postgresql database performace?\n> > > \n> > > When programs request more memory than available, OS 'swaps' some memory to \n> > > special area on disk and make the memory available. To programs, it gives \n> > > appearance that nearly infinite memory is available.\n> > > \n> > > Unfortunately disk are hell slower than RAM and hence swapping slows things \n> > > down as it takes much to swap in to disk and swap out of disk. Since OS does \n> > > not care which programs get swapped, it is possible that postgresql instance \n> > > can get swapped. That slows down effective memory access to knees..\n> > > \n> > > That's why for good performance, a serve should never swap..\n> > > \n> > > Bye\n> > > Shridhar\n> > > \n> > > --\n> > > Peterson's Admonition:\tWhen you think you're going down for the third time --\t\n> > > just remember that you may have counted wrong.\n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to [email protected])\n> > -- \n> > Wei Weng\n> > Network Software Engineer\n> > KenCast Inc.\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> > \n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "14 Nov 2002 11:07:00 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" }, { "msg_contents": "you can use vmstat to measure swap activity. check the man page for your\nsystem.\n\nRobert Treat\n\nOn Thu, 2002-11-14 at 10:53, Wei Weng wrote:\n> How do you notice that if a system started swapping or not?\n> \n> Thanks\n> \n> On Thu, 2002-11-14 at 09:37, Shridhar Daithankar wrote:\n> > On 14 Nov 2002 at 10:30, Wei Weng wrote:\n> > \n> > > The term had been mentioned often enough on this mailing list. Can\n> > > someone enlighten me with some description or a URL where I can read on?\n> > > And why is it important to postgresql database performace?\n> > \n> > When programs request more memory than available, OS 'swaps' some memory to \n> > special area on disk and make the memory available. To programs, it gives \n> > appearance that nearly infinite memory is available.\n> > \n> > Unfortunately disk are hell slower than RAM and hence swapping slows things \n> > down as it takes much to swap in to disk and swap out of disk. Since OS does \n> > not care which programs get swapped, it is possible that postgresql instance \n> > can get swapped. 
That slows down effective memory access to knees..\n> > \n> > That's why for good performance, a serve should never swap..\n> > \n> > Bye\n> > Shridhar\n> > \n> > --\n> > Peterson's Admonition:\tWhen you think you're going down for the third time --\t\n> > just remember that you may have counted wrong.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> -- \n> Wei Weng\n> Network Software Engineer\n> KenCast Inc.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n", "msg_date": "14 Nov 2002 16:29:12 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" }, { "msg_contents": "\nSorry, a little OT, but anybody know the equivalent command for Mac OSX?\nThere doesn't seem to be a vmstat on my system\n\nThanks\n\nadam\n\n\n\n> you can use vmstat to measure swap activity. check the man page for your\n> system.\n> \n> Robert Treat\n> \n> On Thu, 2002-11-14 at 10:53, Wei Weng wrote:\n>> How do you notice that if a system started swapping or not?\n>> \n>> Thanks\n>> \n>> On Thu, 2002-11-14 at 09:37, Shridhar Daithankar wrote:\n>>> On 14 Nov 2002 at 10:30, Wei Weng wrote:\n>>> \n>>>> The term had been mentioned often enough on this mailing list. Can\n>>>> someone enlighten me with some description or a URL where I can read on?\n>>>> And why is it important to postgresql database performace?\n>>> \n>>> When programs request more memory than available, OS 'swaps' some memory to\n>>> special area on disk and make the memory available. To programs, it gives\n>>> appearance that nearly infinite memory is available.\n>>> \n>>> Unfortunately disk are hell slower than RAM and hence swapping slows things\n>>> down as it takes much to swap in to disk and swap out of disk. Since OS does\n>>> not care which programs get swapped, it is possible that postgresql instance\n>>> can get swapped. That slows down effective memory access to knees..\n>>> \n>>> That's why for good performance, a serve should never swap..\n>>> \n>>> Bye\n>>> Shridhar\n>>> \n>>> --\n>>> Peterson's Admonition: When you think you're going down for the third\ntime \n>>> -- \n>>> just remember that you may have counted wrong.\n>>> \n>>> \n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: you can get off all lists at once with the unregister command\n>>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>> -- \n>> Wei Weng\n>> Network Software Engineer\n>> KenCast Inc.\n>> \n>> \n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Fri, 15 Nov 2002 09:42:51 +0000", "msg_from": "Adam Witney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: swapping?" } ]
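As a footnote to the answers above, the usual quick check for swapping is to watch the si/so (swap-in/swap-out) columns of vmstat and the Swap line of free; sustained non-zero values there mean the postmaster may be getting paged out. A small sketch, with arbitrary interval and sample count:

  # report every 2 seconds, 5 samples; non-zero "si"/"so" columns mean the box is swapping
  vmstat 2 5
  # one-shot view; a growing "used" figure on the Swap line is the warning sign
  free
  # sort_mem is per-sort and in kilobytes, so e.g. sort_mem = 4096 allows about 4 MB per sort
  # (on Mac OS X, which lacks vmstat, vm_stat and top expose similar paging counters)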
[ { "msg_contents": "Hi,\n\nWhy is the sort part of my query getting so much time?\n\nI run a relative complex query and it gets about 50 sec.\nFor sorting I need another 50 sec!\n\nCan I increase the sort memory for better performance?\nHow meny memory is needet for the sort in pg.\nThe same data readet in java and sorted cost 10 sec !\n\nAny idea about the pg tining?\n\nRegards,\nIvan.\n\n", "msg_date": "Thu, 14 Nov 2002 17:43:26 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Sort time" }, { "msg_contents": "On Thu, 14 Nov 2002, pginfo wrote:\n\n> Hi,\n> \n> Why is the sort part of my query getting so much time?\n> \n> I run a relative complex query and it gets about 50 sec.\n> For sorting I need another 50 sec!\n> \n> Can I increase the sort memory for better performance?\n> How meny memory is needet for the sort in pg.\n> The same data readet in java and sorted cost 10 sec !\n\nIncreasing sort_mem can help, but often the problem is that your query \nisn't optimal. If you'd like to post the explain analyze output of your \nquery, someone might have a hint on how to increase the efficiency of the \nquery.\n\n", "msg_date": "Thu, 14 Nov 2002 13:00:01 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\nThe sort mem is prety big at the moment.\nFor this tuning I use 256 MB for sort mem !\n\nThe explain plan is:\nEXPLAIN\ngibi=# explain analyze select\nS.IDS_NUM,S.OP,S.KOL,S.OTN_MED,S.CENA,S.DDS,S.KURS,S.TOT,S.DTO,S.PTO,S.DTON,MED.MNAME\nAS MEDNAME,N.MNAME AS NOMENNAME,N.NUM AS NNUM,S.PART,S.IZV,D.DATE_OP from\nA_DOC D , A_SKLAD S, A_NOMEN N ,A_MED MED WHERE S.FID=0 AND\nN.OSN_MED=MED.ID\nS AND S.IDS_NUM=N.IDS AND S.IDS_DOC=D.IDS ORDER BY S.IDS_NUM,S.PART,S.OP ;\nNOTICE: QUERY PLAN:\n\nSort (cost=100922.53..100922.53 rows=22330 width=215) (actual\ntime=111241.88..111735.33 rows=679743 loops=1)\n -> Hash Join (cost=9153.28..99309.52 rows=22330 width=215) (actual\ntime=3386.45..53065.59 rows=679743 loops=1)\n -> Hash Join (cost=2271.05..91995.05 rows=30620 width=198) (actual\ntime=2395.76..36710.54 rows=679743 loops=1)\n -> Seq Scan on a_sklad s (cost=0.00..84181.91 rows=687913\nwidth=111) (actual time=2111.30..22354.10 rows=679743 loops=1)\n -> Hash (cost=2256.59..2256.59 rows=5784 width=87) (actual\ntime=282.95..282.95 rows=0 loops=1)\n -> Hash Join (cost=2.52..2256.59 rows=5784 width=87)\n(actual time=132.54..270.29 rows=5784 loops=1)\n -> Seq Scan on a_nomen n (cost=0.00..2152.84\nrows=5784 width=74) (actual time=127.97..218.02 rows=5784 loops=1)\n -> Hash (cost=2.42..2.42 rows=42 width=13)\n(actual time=0.55..0.55 rows=0 loops=1)\n -> Seq Scan on a_med med (cost=0.00..2.42\nrows=42 width=13) (actual time=0.22..0.43 rows=42 loops=1)\n -> Hash (cost=6605.19..6605.19 rows=110819 width=17) (actual\ntime=987.26..987.26 rows=0 loops=1)\n -> Seq Scan on a_doc d (cost=0.00..6605.19 rows=110819\nwidth=17) (actual time=67.96..771.54 rows=109788 loops=1)\nTotal runtime: 112402.30 msec\n\nEXPLAIN\n\nAll IDS_XXX fields are varchar(20),S.PART is also varchar(20).\nAll tables are indexed.\n\nCan I change any parameters on my pg to increase the speed.\nIt looks very slow.\n\nOnly for test ( I do not need it) I executed:\nEXPLAIN\ngibi=# explain analyze select\nS.IDS_NUM,S.OP,S.KOL,S.OTN_MED,S.CENA,S.DDS,S.KURS,S.TOT,S.DTO,S.PTO,S.DTON,MED.MNAME\nAS MEDNAME,N.MNAME AS NOMENNAME,N.NUM AS NNUM,S.PART,S.IZV,D.DATE_OP from\nA_DOC D , A_SKLAD S, A_NOMEN N ,A_MED MED WHERE 
S.FID=0 AND\nN.OSN_MED=MED.ID\nS AND S.IDS_NUM=N.IDS AND S.IDS_DOC=D.IDS ORDER BY S.OP ;\nNOTICE: QUERY PLAN:\n\nSort (cost=100922.53..100922.53 rows=22330 width=215) (actual\ntime=62141.60..62598.05 rows=679743 loops=1)\n -> Hash Join (cost=9153.28..99309.52 rows=22330 width=215) (actual\ntime=9032.59..54703.33 rows=679743 loops=1)\n -> Hash Join (cost=2271.05..91995.05 rows=30620 width=198) (actual\ntime=8046.91..39132.91 rows=679743 loops=1)\n -> Seq Scan on a_sklad s (cost=0.00..84181.91 rows=687913\nwidth=111) (actual time=7790.01..25565.74 rows=679743 loops=1)\n -> Hash (cost=2256.59..2256.59 rows=5784 width=87) (actual\ntime=255.32..255.32 rows=0 loops=1)\n -> Hash Join (cost=2.52..2256.59 rows=5784 width=87)\n(actual time=123.40..243.02 rows=5784 loops=1)\n -> Seq Scan on a_nomen n (cost=0.00..2152.84\nrows=5784 width=74) (actual time=118.75..204.41 rows=5784 loops=1)\n -> Hash (cost=2.42..2.42 rows=42 width=13)\n(actual time=0.59..0.59 rows=0 loops=1)\n -> Seq Scan on a_med med (cost=0.00..2.42\nrows=42 width=13) (actual time=0.25..0.47 rows=42 loops=1)\n -> Hash (cost=6605.19..6605.19 rows=110819 width=17) (actual\ntime=982.22..982.22 rows=0 loops=1)\n -> Seq Scan on a_doc d (cost=0.00..6605.19 rows=110819\nwidth=17) (actual time=73.46..787.87 rows=109788 loops=1)\nTotal runtime: 63194.60 msec\n\nThe field S.OP is INT.\n\nIt is huge improvement when I sort by INT field, but I need to sort varchar\nfileds !\n\nIs this normal for pg to work so slow with varchar or I can change the setup.\n\nAlso I think the query time ( without sorting is big).\n\nregards and thanks in advance.\n\nscott.marlowe wrote:\n\n> On Thu, 14 Nov 2002, pginfo wrote:\n>\n> > Hi,\n> >\n> > Why is the sort part of my query getting so much time?\n> >\n> > I run a relative complex query and it gets about 50 sec.\n> > For sorting I need another 50 sec!\n> >\n> > Can I increase the sort memory for better performance?\n> > How meny memory is needet for the sort in pg.\n> > The same data readet in java and sorted cost 10 sec !\n>\n> Increasing sort_mem can help, but often the problem is that your query\n> isn't optimal. 
If you'd like to post the explain analyze output of your\n> query, someone might have a hint on how to increase the efficiency of the\n> query.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 07:08:42 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Sorry,\nI can post a little more info:\n\nI run the same query ( and receive the same result), but in this time I\nstarted vmstat 2, to see the system state.\nThe results:\n\ngibi=# explain analyze select\nS.IDS_NUM,S.OP,S.KOL,S.OTN_MED,S.CENA,S.DDS,S.KURS,S.TOT,S.DTO,S.PTO,S.DTON,MED.MNAME\nAS MEDNAME,N.MNAME AS NOMENNAME,N.NUM AS NNUM,S.PART,S.IZV,D.DATE_OP from\nA_DOC D , A_SKLAD S, A_NOMEN N ,A_MED MED WHERE S.FID=0 AND\nN.OSN_MED=MED.ID\nS AND S.IDS_NUM=N.IDS AND S.IDS_DOC=D.IDS ORDER BY S.IDS_NUM,S.PART,S.OP ;\nNOTICE: QUERY PLAN:\n\nSort (cost=100922.53..100922.53 rows=22330 width=215) (actual\ntime=109786.23..110231.74 rows=679743 loops=1)\n -> Hash Join (cost=9153.28..99309.52 rows=22330 width=215) (actual\ntime=12572.01..56330.28 rows=679743 loops=1)\n -> Hash Join (cost=2271.05..91995.05 rows=30620 width=198) (actual\ntime=7082.66..36482.57 rows=679743 loops=1)\n -> Seq Scan on a_sklad s (cost=0.00..84181.91 rows=687913\nwidth=111) (actual time=6812.81..23085.36 rows=679743 loops=1)\n -> Hash (cost=2256.59..2256.59 rows=5784 width=87) (actual\ntime=268.05..268.05 rows=0 loops=1)\n -> Hash Join (cost=2.52..2256.59 rows=5784 width=87)\n(actual time=125.25..255.48 rows=5784 loops=1)\n -> Seq Scan on a_nomen n (cost=0.00..2152.84\nrows=5784 width=74) (actual time=120.63..216.93 rows=5784 loops=1)\n -> Hash (cost=2.42..2.42 rows=42 width=13)\n(actual time=0.57..0.57 rows=0 loops=1)\n -> Seq Scan on a_med med (cost=0.00..2.42\nrows=42 width=13) (actual time=0.24..0.46 rows=42 loops=1)\n -> Hash (cost=6605.19..6605.19 rows=110819 width=17) (actual\ntime=5485.90..5485.90 rows=0 loops=1)\n -> Seq Scan on a_doc d (cost=0.00..6605.19 rows=110819\nwidth=17) (actual time=61.18..5282.99 rows=109788 loops=1)\nTotal runtime: 110856.36 msec\n\nEXPLAIN\n\n vmstat 2\n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us sy\nid\n 0 0 0 32104 196932 77404 948256 0 0 30 12 24 12 6 1\n27\n 0 1 1 32104 181792 77404 952416 0 0 2080 36 328 917 7 9\n84\n 0 1 0 32104 170392 77404 959584 0 0 3584 16 533 1271 5 4\n91\n 1 0 0 32104 162612 77404 965216 0 0 2816 0 514 1332 2 6\n92\n 1 0 0 32104 146832 77404 979956 0 0 7370 18 631 1741 5 16\n79\n 1 0 0 32104 129452 77404 997364 0 0 8704 0 719 1988 7 7\n86\n 0 2 1 32104 116016 77404 1010632 0 0 6634 8 563 1495 6 20\n74\n 1 0 0 32104 109844 77404 1013360 0 0 1364 2 228 584 31 24\n45\n 1 0 0 32104 101244 77404 1013364 0 0 2 0 103 219 43 11\n46\n 1 0 0 32104 84652 77404 1021328 0 0 3982 16 402 455 44 8\n49\n 3 0 0 32104 72916 77404 1024404 0 0 1538 0 294 215 44 5\n51\n 2 0 0 32104 63844 77404 1024404 0 0 0 10 103 222 47 3\n50\n 1 0 0 32104 54600 77404 1024404 0 0 0 0 102 222 55 6\n39\n 1 0 0 32104 45472 77404 1024404 0 0 0 0 102 220 45 6\n50\n 1 0 0 32104 36060 77404 1024404 0 0 0 10 103 215 45 5\n50\n 2 0 0 32104 26640 77404 1024404 0 0 0 0 106 218 43 7\n50\n 2 0 0 32104 17440 77404 1024404 0 0 0 10 148 253 46 6\n48\n 1 0 0 32104 10600 77404 1022004 0 0 0 0 102 215 42 8\n50\n 1 0 0 32104 10604 77404 1013900 0 0 0 0 103 212 41 9\n50\n 1 0 0 
32104 10600 77404 1006452 0 0 0 26 106 225 38 12\n50\n 2 0 0 32104 10600 77404 997412 0 0 0 0 102 213 48 3\n50\n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us sy\nid\n 1 0 0 32104 10572 77428 988936 0 0 340 118 214 455 62 8\n29\n 1 0 0 32104 10532 77432 979872 0 0 642 124 307 448 70 12\n18\n 1 0 0 32104 10516 77432 970316 0 0 0 0 102 238 49 6\n45\n 1 0 0 32104 10508 77432 960880 0 0 0 46 105 224 50 5\n45\n 1 0 0 32104 10500 77432 951740 0 0 3398 34 174 445 47 9\n44\n 1 0 1 32104 10112 77432 943588 0 0 8192 94 289 544 50 12\n39\n 1 0 0 32104 10484 77432 937204 0 0 16896 0 386 1698 37 20\n43\n 2 0 0 32104 10484 77432 930004 0 0 14080 0 345 1415 39 17\n45\n 3 0 0 32104 27976 77432 925592 0 0 1844 16 136 329 46 6\n49\n 2 0 0 32104 27924 77432 925592 0 0 0 0 104 220 50 0\n49\n 2 0 0 32104 27756 77436 925592 0 0 0 8 103 222 51 2\n47\n 1 0 0 32104 27756 77436 925592 0 0 0 0 102 222 54 1\n45\n 1 0 0 32104 27756 77436 925592 0 0 0 0 102 220 55 0\n45\n 1 0 0 32104 27424 77436 925592 0 0 0 24 104 224 54 1\n45\n 1 0 0 32104 27424 77436 925592 0 0 0 0 102 218 55 0\n45\n 3 0 0 32104 27424 77436 925592 0 0 0 8 103 221 55 0\n45\n 1 0 0 32104 27424 77436 925592 0 0 0 0 103 222 55 0\n45\n 1 0 0 32104 27456 77436 925592 0 0 0 0 104 222 55 0\n45\n 1 0 0 32104 27456 77436 925592 0 0 0 8 104 222 55 0\n45\n 2 0 0 32104 26792 77436 925592 0 0 0 0 102 218 55 1\n44\n 2 0 0 32104 26792 77436 925592 0 0 0 8 103 222 55 0\n44\n procs memory swap io system\ncpu\n r b w swpd free buff cache si so bi bo in cs us sy\nid\n 2 0 0 32104 26792 77436 925592 0 0 0 0 102 221 66 0\n33\n 1 0 0 32104 26792 77436 925592 0 0 0 0 103 221 55 0\n44\n 1 0 0 32104 26792 77436 925592 0 0 0 8 103 219 55 0\n44\n 1 0 0 32104 26792 77436 925592 0 0 0 0 104 221 56 0\n44\n 2 0 0 32104 26792 77436 925592 0 0 0 8 105 223 56 0\n44\n 1 0 0 32104 26792 77436 925592 0 0 0 0 102 222 56 0\n44\n 1 0 0 32104 26792 77436 925592 0 0 0 8 106 223 55 1\n44\n 1 0 0 32104 26792 77436 925592 0 0 0 0 102 216 56 0\n44\n 2 0 0 32104 26792 77436 925592 0 0 0 0 102 221 56 0\n43\n 2 0 0 32104 26628 77436 925592 0 0 0 26 106 230 57 0\n43\n 1 0 0 32104 26768 77440 925592 0 0 0 12 104 228 57 0\n43\n 1 0 0 32104 26760 77448 925592 0 0 0 30 106 226 56 1\n43\n 2 0 0 32104 26168 77448 925592 0 0 0 0 102 221 57 0\n43\n 1 0 0 32104 28088 77448 925592 0 0 0 0 103 220 46 12\n42\n\nCan I tune better my linux box or pq to get faster execution?\n\nregards.\n\n\n\nscott.marlowe wrote:\n\n> On Thu, 14 Nov 2002, pginfo wrote:\n>\n> > Hi,\n> >\n> > Why is the sort part of my query getting so much time?\n> >\n> > I run a relative complex query and it gets about 50 sec.\n> > For sorting I need another 50 sec!\n> >\n> > Can I increase the sort memory for better performance?\n> > How meny memory is needet for the sort in pg.\n> > The same data readet in java and sorted cost 10 sec !\n>\n> Increasing sort_mem can help, but often the problem is that your query\n> isn't optimal. 
If you'd like to post the explain analyze output of your\n> query, someone might have a hint on how to increase the efficiency of the\n> query.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 07:29:21 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi Tom,\n\nI use unicode for my db, but the locale is US!\nThe unicode is only for non english varchar and I do not make any comparation or\nsorts or joins based\non non english fields ( all this is made in the client part of the system).\n\nWhat locale will be fast?\nHave you any info about the speed in the faster locale and in INT?\n\nregards.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > It is huge improvement when I sort by INT field, but I need to sort varchar\n> > fileds !\n>\n> What locale are you using? strcoll() comparisons can be awfully slow in\n> some locales.\n>\n> regards, tom lane\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 15:45:53 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Ok,\nThanks!\n\nHave any one anoder idea?\nregards.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > What locale will be fast?\n>\n> C locale (a/k/a POSIX locale) should be quick. Not sure about anything\n> else.\n>\n> regards, tom lane\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 16:10:31 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "pginfo <[email protected]> writes:\n> It is huge improvement when I sort by INT field, but I need to sort varchar\n> fileds !\n\nWhat locale are you using? strcoll() comparisons can be awfully slow in\nsome locales.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 10:31:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "pginfo <[email protected]> writes:\n> What locale will be fast?\n\nC locale (a/k/a POSIX locale) should be quick. Not sure about anything\nelse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 10:59:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "Hi,\nYes I have indexes on all this fields.\nAlso I vacuumed and that is the result after it.\nActualy I do not see what bad in query execution. 
The problem is in sort\ntime!\n\nregards.\n\nJosh Berkus wrote:\n\n> Pginfo,\n>\n> > Sort (cost=100922.53..100922.53 rows=22330 width=215) (actual\n> > time=109786.23..110231.74 rows=679743 loops=1)\n> > -> Hash Join (cost=9153.28..99309.52 rows=22330 width=215)\n> > (actual\n> > time=12572.01..56330.28 rows=679743 loops=1)\n> > -> Hash Join (cost=2271.05..91995.05 rows=30620 width=198)\n> > (actual\n> > time=7082.66..36482.57 rows=679743 loops=1)\n> > -> Seq Scan on a_sklad s (cost=0.00..84181.91\n> > rows=687913\n> > width=111) (actual time=6812.81..23085.36 rows=679743 loops=1)\n> > -> Hash (cost=2256.59..2256.59 rows=5784 width=87)\n> > (actual\n> > time=268.05..268.05 rows=0 loops=1)\n> > -> Hash Join (cost=2.52..2256.59 rows=5784\n> > width=87)\n> > (actual time=125.25..255.48 rows=5784 loops=1)\n> > -> Seq Scan on a_nomen n\n> > (cost=0.00..2152.84\n> > rows=5784 width=74) (actual time=120.63..216.93 rows=5784 loops=1)\n> > -> Hash (cost=2.42..2.42 rows=42\n> > width=13)\n> > (actual time=0.57..0.57 rows=0 loops=1)\n> > -> Seq Scan on a_med med\n> > (cost=0.00..2.42\n> > rows=42 width=13) (actual time=0.24..0.46 rows=42 loops=1)\n> > -> Hash (cost=6605.19..6605.19 rows=110819 width=17)\n> > (actual\n> > time=5485.90..5485.90 rows=0 loops=1)\n> > -> Seq Scan on a_doc d (cost=0.00..6605.19\n> > rows=110819\n> > width=17) (actual time=61.18..5282.99 rows=109788 loops=1)\n> > Total runtime: 110856.36 msec\n>\n> Pardon me if we've been over this ground, but that's a *lot* of seq\n> scans for this query. It seems odd that there's not *one* index scan.\n>\n> Have you tried indexing *all* of the following fields?\n> S.FID\n> N.OSN_MED\n> S.IDS_NUM\n> N.IDS\n> S.IDS_DOC\n> D.IDS\n> (check to avoid duplicate indexes. don't forget to VACUUM ANALYZE\n> after you index)\n>\n> -Josh Berkus\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 17:22:28 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Pginfo,\n\n> Sort (cost=100922.53..100922.53 rows=22330 width=215) (actual\n> time=109786.23..110231.74 rows=679743 loops=1)\n> -> Hash Join (cost=9153.28..99309.52 rows=22330 width=215)\n> (actual\n> time=12572.01..56330.28 rows=679743 loops=1)\n> -> Hash Join (cost=2271.05..91995.05 rows=30620 width=198)\n> (actual\n> time=7082.66..36482.57 rows=679743 loops=1)\n> -> Seq Scan on a_sklad s (cost=0.00..84181.91\n> rows=687913\n> width=111) (actual time=6812.81..23085.36 rows=679743 loops=1)\n> -> Hash (cost=2256.59..2256.59 rows=5784 width=87)\n> (actual\n> time=268.05..268.05 rows=0 loops=1)\n> -> Hash Join (cost=2.52..2256.59 rows=5784\n> width=87)\n> (actual time=125.25..255.48 rows=5784 loops=1)\n> -> Seq Scan on a_nomen n\n> (cost=0.00..2152.84\n> rows=5784 width=74) (actual time=120.63..216.93 rows=5784 loops=1)\n> -> Hash (cost=2.42..2.42 rows=42\n> width=13)\n> (actual time=0.57..0.57 rows=0 loops=1)\n> -> Seq Scan on a_med med\n> (cost=0.00..2.42\n> rows=42 width=13) (actual time=0.24..0.46 rows=42 loops=1)\n> -> Hash (cost=6605.19..6605.19 rows=110819 width=17)\n> (actual\n> time=5485.90..5485.90 rows=0 loops=1)\n> -> Seq Scan on a_doc d (cost=0.00..6605.19\n> rows=110819\n> width=17) (actual time=61.18..5282.99 rows=109788 loops=1)\n> Total runtime: 110856.36 msec\n\nPardon me if we've been over this ground, but that's a *lot* of seq\nscans for this query. 
It seems odd that there's not *one* index scan.\n\nHave you tried indexing *all* of the following fields?\nS.FID\nN.OSN_MED\nS.IDS_NUM\nN.IDS\nS.IDS_DOC\nD.IDS\n(check to avoid duplicate indexes. don't forget to VACUUM ANALYZE\nafter you index)\n\n-Josh Berkus\n\n\n\n\n\n\n", "msg_date": "Fri, 15 Nov 2002 09:12:55 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "Pginfo,\n\n> Yes I have indexes on all this fields.\n> Also I vacuumed and that is the result after it.\n> Actualy I do not see what bad in query execution. The problem is in\n> sort\n> time!\n\nHmmm... I don't understand. The way I read the EXPLAIN, the sort is\nonly taking a few seconds. Am I missing something, here? \n\nAnd that's \"VACUUM FULL ANALYZE\", not just \"VACUUM\", yes?\n\nIf all of the above has been tried, what happens to the query when you\nset enable_seqscan=off?\n\n-Josh Berkus\n", "msg_date": "Fri, 15 Nov 2002 09:33:47 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Fri, 2002-11-15 at 12:33, Josh Berkus wrote:\n> Pginfo,\n> \n> > Yes I have indexes on all this fields.\n> > Also I vacuumed and that is the result after it.\n> > Actualy I do not see what bad in query execution. The problem is in\n> > sort\n> > time!\n> \n> Hmmm... I don't understand. The way I read the EXPLAIN, the sort is\n> only taking a few seconds. Am I missing something, here? \n\nThe estimated cost had the sort at a few seconds, but the actual times\nshow it is taking 50% of the total query time.\n\nThe big problem is he's sorting by a varchar() which isn't overly quick\nno matter what locale. Integers are nice and quick (s.OP is an int,\nwhich shows this).\n\nIf IDS_NUM is a number, he could try casting it to an int8, but without\ndata examples I couldn't say.\n\n-- \nRod Taylor <[email protected]>\n", "msg_date": "15 Nov 2002 13:48:23 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "\nRod,\n\n> The estimated cost had the sort at a few seconds, but the actual times\n> show it is taking 50% of the total query time.\n\nD'oh! I was, of course, subtracting the estimated from the actual time. \nOops.\n\n> \n> The big problem is he's sorting by a varchar() which isn't overly quick\n> no matter what locale. Integers are nice and quick (s.OP is an int,\n> which shows this).\n> \n> If IDS_NUM is a number, he could try casting it to an int8, but without\n> data examples I couldn't say.\n\nHmmm ... how big *is* that varchar field? 8 characters gives us about 6mb for \nthe column. Of course, if it's a 128-char global unque id, that;s a bit \nlarger.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 15 Nov 2002 11:27:29 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Fri, 2002-11-15 at 14:27, Josh Berkus wrote:\n> > The big problem is he's sorting by a varchar() which isn't overly quick\n> > no matter what locale. 
Integers are nice and quick (s.OP is an int,\n> > which shows this).\n> > \n> > If IDS_NUM is a number, he could try casting it to an int8, but without\n> > data examples I couldn't say.\n> \n> Hmmm ... how big *is* that varchar field? 8 characters gives us about 6mb for \n> the column. Of course, if it's a 128-char global unque id, that;s a bit \n> larger.\n\n20 characters long in the Unicode locale -- which is 40 bytes?\n--\nRod Taylor <[email protected]>\n", "msg_date": "15 Nov 2002 14:28:39 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "\nRod,\n\n> > Hmmm ... how big *is* that varchar field? 8 characters gives us about 6mb \nfor \n> > the column. Of course, if it's a 128-char global unque id, that;s a bit \n> > larger.\n> \n> 20 characters long in the Unicode locale -- which is 40 bytes?\n\nWell, 40+, probably about 43. Should be about 29mb, yes?\nHere's a question: is the total size of the column a good indicator of the \nsort_mem required? Or does the rowsize affect it somehow?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 15 Nov 2002 13:18:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Fri, 2002-11-15 at 16:18, Josh Berkus wrote:\n> Rod,\n> \n> > > Hmmm ... how big *is* that varchar field? 8 characters gives us about 6mb \n> for \n> > > the column. Of course, if it's a 128-char global unque id, that;s a bit \n> > > larger.\n> > \n> > 20 characters long in the Unicode locale -- which is 40 bytes?\n> \n> Well, 40+, probably about 43. Should be about 29mb, yes?\n> Here's a question: is the total size of the column a good indicator of the \n> sort_mem required? Or does the rowsize affect it somehow?\n\nI'd suspect the total row is sorted, especially in this case where he's\nsorting more than one attribute.\n\n-- \nRod Taylor <[email protected]>\n", "msg_date": "15 Nov 2002 17:40:19 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n> ... don't forget to VACUUM ANALYZE after you index ...\n\nPeople keep saying that, but it's a myth. ANALYZE doesn't care what\nindexes are present; adding or deleting an index doesn't invalidate\nprevious ANALYZE results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 18:02:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "ANALYZE and indexes (was Re: Sort time)" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Here's a question: is the total size of the column a good indicator of the \n> sort_mem required? Or does the rowsize affect it somehow?\n\nIt will include all the data that's supposed to be output by the sort...\nboth the key column(s) and the others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Nov 2002 18:32:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "Hi,\n\nRod Taylor wrote:\n\n> On Fri, 2002-11-15 at 16:18, Josh Berkus wrote:\n> > Rod,\n> >\n> > > > Hmmm ... how big *is* that varchar field? 
8 characters gives us about 6mb\n> > for\n> > > > the column. Of course, if it's a 128-char global unque id, that;s a bit\n> > > > larger.\n> > >\n> > > 20 characters long in the Unicode locale -- which is 40 bytes?\n> >\n> > Well, 40+, probably about 43. Should be about 29mb, yes?\n> > Here's a question: is the total size of the column a good indicator of the\n> > sort_mem required? Or does the rowsize affect it somehow?\n>\n> I'd suspect the total row is sorted, especially in this case where he's\n> sorting more than one attribute.\n>\n\nI think that total the row is sorted.I do not know hoe is sorting in pg working and\nwhy so slow,\nbut I tested all this in java ( in C is much quicker)\nand the make this:\n1. Read all data in memory defined as ArrayList from structure of data.\n2. make comparator with unicode string compare.\n3. Execute sort (all in memory)\n\nThe sort take 2-4 sek for all this rows!!!\nIt is good as performance.\nThe question is : Why is it in ps so slow?\nSorting is normal think for db!\nAlso I have 256 MB for sort mem and this was the only executing query at the moment.\n\nI know that if the fields are INT all will work better, but we migrate this\napplication from oracle\nand the fields in oracle was varchar.\nWe do not have any performance problems with oracle and this data.\nAlso one part from users will continue to work with oracle and exchange ( import and\nexport) data\nto the pg systems.\n\n> --\n> Rod Taylor <[email protected]>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Sat, 16 Nov 2002 07:20:35 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\n\nTom Lane wrote:\n\n> Josh Berkus <[email protected]> writes:\n> > Here's a question: is the total size of the column a good indicator of the\n> > sort_mem required? Or does the rowsize affect it somehow?\n>\n> It will include all the data that's supposed to be output by the sort...\n> both the key column(s) and the others.\n>\n\nHmm it is not clear for me.Let we have all data.\nIf I make sort by S.OP ( it is INT) it take < 6 sek for sort.\nI think we move all this data anly the number of comparation is by INT. I think\nthe number of comparation\nis ~ n * ln(n).\nIf we sort by S.IDS_xxx we have also n*ln(n) comparations but in\nvarchar(string).\nI don't think that it can take 50 sek.\n\nIs it not so?\n\nregards,\nivan.\n\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n", "msg_date": "Sat, 16 Nov 2002 07:36:52 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Sat, 16 Nov 2002, pginfo wrote:\n\n> Hi,\n>\n> Tom Lane wrote:\n>\n> > Josh Berkus <[email protected]> writes:\n> > > Here's a question: is the total size of the column a good indicator of the\n> > > sort_mem required? Or does the rowsize affect it somehow?\n> >\n> > It will include all the data that's supposed to be output by the sort...\n> > both the key column(s) and the others.\n> >\n>\n> Hmm it is not clear for me.Let we have all data.\n> If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> I think we move all this data anly the number of comparation is by INT. 
I think\n> the number of comparation\n> is ~ n * ln(n).\n> If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> varchar(string).\n> I don't think that it can take 50 sek.\n>\n> Is it not so?\n\nHave you tried setting up another database in \"C\" locale and compared the\ntimings there? I'd wonder if maybe there's some extra copying going on\ngiven the comments in varstr_cmp.\n\n", "msg_date": "Sat, 16 Nov 2002 09:15:11 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\n\nStephan Szabo wrote:\n\n> On Sat, 16 Nov 2002, pginfo wrote:\n>\n> > Hi,\n> >\n> > Tom Lane wrote:\n> >\n> > > Josh Berkus <[email protected]> writes:\n> > > > Here's a question: is the total size of the column a good indicator of the\n> > > > sort_mem required? Or does the rowsize affect it somehow?\n> > >\n> > > It will include all the data that's supposed to be output by the sort...\n> > > both the key column(s) and the others.\n> > >\n> >\n> > Hmm it is not clear for me.Let we have all data.\n> > If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> > I think we move all this data anly the number of comparation is by INT. I think\n> > the number of comparation\n> > is ~ n * ln(n).\n> > If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> > varchar(string).\n> > I don't think that it can take 50 sek.\n> >\n> > Is it not so?\n>\n> Have you tried setting up another database in \"C\" locale and compared the\n> timings there? I'd wonder if maybe there's some extra copying going on\n> given the comments in varstr_cmp.\n\nNo, I do not have any info about it.I will see if it is possible ( the data are not\nso simple).\nIf it is possible I will make the tests.\nHave no one that have 700K row in thow tables?\nIt is simple to test:\n1. Run query that returns ~700K rows from this tables.\n2. Make sort.\n\nIt is interest only the sort time!\n\nregards,\nIvan.\n\n\n", "msg_date": "Sun, 17 Nov 2002 07:30:28 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\n\nStephan Szabo wrote:\n\n> On Sun, 17 Nov 2002, pginfo wrote:\n>\n> > Hi,\n> >\n> > Stephan Szabo wrote:\n> >\n> > > On Sat, 16 Nov 2002, pginfo wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > Tom Lane wrote:\n> > > >\n> > > > > Josh Berkus <[email protected]> writes:\n> > > > > > Here's a question: is the total size of the column a good indicator of the\n> > > > > > sort_mem required? Or does the rowsize affect it somehow?\n> > > > >\n> > > > > It will include all the data that's supposed to be output by the sort...\n> > > > > both the key column(s) and the others.\n> > > > >\n> > > >\n> > > > Hmm it is not clear for me.Let we have all data.\n> > > > If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> > > > I think we move all this data anly the number of comparation is by INT. I think\n> > > > the number of comparation\n> > > > is ~ n * ln(n).\n> > > > If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> > > > varchar(string).\n> > > > I don't think that it can take 50 sek.\n> > > >\n> > > > Is it not so?\n> > >\n> > > Have you tried setting up another database in \"C\" locale and compared the\n> > > timings there? 
I'd wonder if maybe there's some extra copying going on\n> > > given the comments in varstr_cmp.\n> >\n> > No, I do not have any info about it.I will see if it is possible ( the data are not\n> > so simple).\n> > If it is possible I will make the tests.\n> > Have no one that have 700K row in thow tables?\n> > It is simple to test:\n> > 1. Run query that returns ~700K rows from this tables.\n> > 2. Make sort.\n> >\n> > It is interest only the sort time!\n>\n> I can make a table of 700k rows and test it (and am generating 700k of\n> random varchar rows), but I wouldn't hold great hope that this is\n> necessarily a valid test since possibly any of OS, configuration settings\n> and actual data (width and values) might have an effect on the results.\n>\n\nIt is so.But the info will help.\nIf the sort time is 5-6 sek.(by me it is 50 sek) I will work on config and OS settings.\nI am uning RH 7.3 at the moment. If anoder OS will have better performance I will make\nthe change.\nBut if the sort time is ~50 sek in any OS and config the problem will be in pg and I will\nstart to think about to\nrewrite the sort part of src or migrate to anoder db(mysql or SAPdb. On oracle we have\nsuper performance in sorting at the moment, but the idea is to move\nthe project to pg).\n\nI think the sort is very important for any db.\n\nAlso it will be possible for me (in 1-2 days ) to install anoder box for tests and give\naccess to some one that can see the problem.\nBut as beginning it will be great to have more info about sort test results.\n\nIf any one have better idea I am ready to discuse it.\n\nregards,\nIvan.\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Sun, 17 Nov 2002 08:29:21 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Sun, 17 Nov 2002, pginfo wrote:\n\n> Hi,\n>\n> Stephan Szabo wrote:\n>\n> > On Sat, 16 Nov 2002, pginfo wrote:\n> >\n> > > Hi,\n> > >\n> > > Tom Lane wrote:\n> > >\n> > > > Josh Berkus <[email protected]> writes:\n> > > > > Here's a question: is the total size of the column a good indicator of the\n> > > > > sort_mem required? Or does the rowsize affect it somehow?\n> > > >\n> > > > It will include all the data that's supposed to be output by the sort...\n> > > > both the key column(s) and the others.\n> > > >\n> > >\n> > > Hmm it is not clear for me.Let we have all data.\n> > > If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> > > I think we move all this data anly the number of comparation is by INT. I think\n> > > the number of comparation\n> > > is ~ n * ln(n).\n> > > If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> > > varchar(string).\n> > > I don't think that it can take 50 sek.\n> > >\n> > > Is it not so?\n> >\n> > Have you tried setting up another database in \"C\" locale and compared the\n> > timings there? I'd wonder if maybe there's some extra copying going on\n> > given the comments in varstr_cmp.\n>\n> No, I do not have any info about it.I will see if it is possible ( the data are not\n> so simple).\n> If it is possible I will make the tests.\n> Have no one that have 700K row in thow tables?\n> It is simple to test:\n> 1. Run query that returns ~700K rows from this tables.\n> 2. 
Make sort.\n>\n> It is interest only the sort time!\n\nI can make a table of 700k rows and test it (and am generating 700k of\nrandom varchar rows), but I wouldn't hold great hope that this is\nnecessarily a valid test since possibly any of OS, configuration settings\nand actual data (width and values) might have an effect on the results.\n\n", "msg_date": "Sat, 16 Nov 2002 23:44:38 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\n\nStephan Szabo wrote:\n\n> On Sat, 16 Nov 2002, Stephan Szabo wrote:\n>\n> > On Sun, 17 Nov 2002, pginfo wrote:\n> >\n> > > Hi,\n> > >\n> > > Stephan Szabo wrote:\n> > >\n> > > > On Sat, 16 Nov 2002, pginfo wrote:\n> > > >\n> > > > > Hi,\n> > > > >\n> > > > > Tom Lane wrote:\n> > > > >\n> > > > > > Josh Berkus <[email protected]> writes:\n> > > > > > > Here's a question: is the total size of the column a good indicator of the\n> > > > > > > sort_mem required? Or does the rowsize affect it somehow?\n> > > > > >\n> > > > > > It will include all the data that's supposed to be output by the sort...\n> > > > > > both the key column(s) and the others.\n> > > > > >\n> > > > >\n> > > > > Hmm it is not clear for me.Let we have all data.\n> > > > > If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> > > > > I think we move all this data anly the number of comparation is by INT. I think\n> > > > > the number of comparation\n> > > > > is ~ n * ln(n).\n> > > > > If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> > > > > varchar(string).\n> > > > > I don't think that it can take 50 sek.\n> > > > >\n> > > > > Is it not so?\n> > > >\n> > > > Have you tried setting up another database in \"C\" locale and compared the\n> > > > timings there? I'd wonder if maybe there's some extra copying going on\n> > > > given the comments in varstr_cmp.\n> > >\n> > > No, I do not have any info about it.I will see if it is possible ( the data are not\n> > > so simple).\n> > > If it is possible I will make the tests.\n> > > Have no one that have 700K row in thow tables?\n> > > It is simple to test:\n> > > 1. Run query that returns ~700K rows from this tables.\n> > > 2. Make sort.\n> > >\n> > > It is interest only the sort time!\n> >\n> > I can make a table of 700k rows and test it (and am generating 700k of\n> > random varchar rows), but I wouldn't hold great hope that this is\n> > necessarily a valid test since possibly any of OS, configuration settings\n> > and actual data (width and values) might have an effect on the results.\n>\n> On my not terribly powerful or memory filled box, I got a time of about\n> 16s after going through a couple iterations of raising sort_mem and\n> watching if it made temp files (which is probably a good idea to check as\n> well). The data size ended up being in the vicinity of 100 meg in my\n> case.\n\nThe time is very good!\nIt is very good idea to watch the temp files.\nI started the sort_mem to 32 mb (it is 256 on the production system)\nand I see 3 temp files. The first is ~ 1.8 mb. 
The second is ~55 mb and the last is ~150\nmb.\n\nAlso I removed the bigest as size fileds from my query but got only litle improvemen.\n\nregards,\nivan.\n\n", "msg_date": "Sun, 17 Nov 2002 09:16:01 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "\nOn Sat, 16 Nov 2002, Stephan Szabo wrote:\n\n> On Sun, 17 Nov 2002, pginfo wrote:\n>\n> > Hi,\n> >\n> > Stephan Szabo wrote:\n> >\n> > > On Sat, 16 Nov 2002, pginfo wrote:\n> > >\n> > > > Hi,\n> > > >\n> > > > Tom Lane wrote:\n> > > >\n> > > > > Josh Berkus <[email protected]> writes:\n> > > > > > Here's a question: is the total size of the column a good indicator of the\n> > > > > > sort_mem required? Or does the rowsize affect it somehow?\n> > > > >\n> > > > > It will include all the data that's supposed to be output by the sort...\n> > > > > both the key column(s) and the others.\n> > > > >\n> > > >\n> > > > Hmm it is not clear for me.Let we have all data.\n> > > > If I make sort by S.OP ( it is INT) it take < 6 sek for sort.\n> > > > I think we move all this data anly the number of comparation is by INT. I think\n> > > > the number of comparation\n> > > > is ~ n * ln(n).\n> > > > If we sort by S.IDS_xxx we have also n*ln(n) comparations but in\n> > > > varchar(string).\n> > > > I don't think that it can take 50 sek.\n> > > >\n> > > > Is it not so?\n> > >\n> > > Have you tried setting up another database in \"C\" locale and compared the\n> > > timings there? I'd wonder if maybe there's some extra copying going on\n> > > given the comments in varstr_cmp.\n> >\n> > No, I do not have any info about it.I will see if it is possible ( the data are not\n> > so simple).\n> > If it is possible I will make the tests.\n> > Have no one that have 700K row in thow tables?\n> > It is simple to test:\n> > 1. Run query that returns ~700K rows from this tables.\n> > 2. Make sort.\n> >\n> > It is interest only the sort time!\n>\n> I can make a table of 700k rows and test it (and am generating 700k of\n> random varchar rows), but I wouldn't hold great hope that this is\n> necessarily a valid test since possibly any of OS, configuration settings\n> and actual data (width and values) might have an effect on the results.\n\nOn my not terribly powerful or memory filled box, I got a time of about\n16s after going through a couple iterations of raising sort_mem and\nwatching if it made temp files (which is probably a good idea to check as\nwell). The data size ended up being in the vicinity of 100 meg in my\ncase.\n\n\n", "msg_date": "Sun, 17 Nov 2002 00:18:22 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Sun, 17 Nov 2002, pginfo wrote:\n\n> > On my not terribly powerful or memory filled box, I got a time of about\n> > 16s after going through a couple iterations of raising sort_mem and\n> > watching if it made temp files (which is probably a good idea to check as\n> > well). The data size ended up being in the vicinity of 100 meg in my\n> > case.\n>\n> The time is very good!\n> It is very good idea to watch the temp files.\n> I started the sort_mem to 32 mb (it is 256 on the production system)\n> and I see 3 temp files. The first is ~ 1.8 mb. 
The second is ~55 mb and the last is ~150\n> mb.\n\nAs a note, the same data loaded into a non-\"C\" locale database took about\n42 seconds on the same machine, approximately 2.5x as long.\n\n", "msg_date": "Sun, 17 Nov 2002 09:29:50 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> As a note, the same data loaded into a non-\"C\" locale database took about\n> 42 seconds on the same machine, approximately 2.5x as long.\n\nThe non-C locale is undoubtedly the problem. I made a test dataset of\n700000 all-alphabetic 20-character random strings:\n\n$ head rand.data\nduofoesrlycdnilvlcrg\ncrealjdrjpyczfbnlouo\nlxaiyicslwjnxgpehtzp\nykizuovkvpkvvqsaocys\nrkkvrqfiiybczwqdvvfu\nstonxhbbvgwtjszodguv\nprqxhwcfibiopjpiddud\nubgexbfdodhnauytebcf\nurfoqifgbrladpssrwzw\nydcrsnxjpxospfqqoilw\n\nI performed the following experiment in 7.3 using a database in\nen_US locale, SQL_ASCII encoding:\n\nenus=# create table vc20 (f1 varchar(20));\nCREATE TABLE\nenus=# \\copy vc20 from rand.data\n\\.\nenus=# vacuum analyze vc20;\nVACUUM\nenus=# set sort_mem to 50000;\nSET\nenus=# explain analyze select count(*) from\nenus-# (select * from vc20 order by f1) ss;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=83607.48..83607.48 rows=1 width=24) (actual time=1058167.66..1058167.67 rows=1 loops=1)\n -> Subquery Scan ss (cost=80107.48..81857.48 rows=700000 width=24) (actual time=1022972.86..1049559.50 rows=700000 loops=1)\n -> Sort (cost=80107.48..81857.48 rows=700000 width=24) (actual time=1022972.80..1034036.58 rows=700000 loops=1)\n Sort Key: f1\n -> Seq Scan on vc20 (cost=0.00..12148.00 rows=700000 width=24) (actual time=0.20..24651.65 rows=700000 loops=1)\n Total runtime: 1058220.10 msec\n(6 rows)\n\n(The point of the select count(*) was to avoid shipping the result rows\nto the client, but in hindsight \"explain analyze\" would suppress that\nanyway. 
But the main datapoint here is the time for the Sort step.)\n\nI tried the test using datatype NAME as well, since it sorts using\nplain strcmp() instead of strcoll():\n\nenus=# create table nm (f1 name);\nCREATE TABLE\nenus=# insert into nm select f1 from vc20;\nINSERT 0 700000\nenus=# vacuum analyze nm;\nVACUUM\nenus=# set sort_mem to 50000;\nSET\nenus=# explain analyze select count(*) from\nenus-# (select * from nm order by f1) ss;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=122701.48..122701.48 rows=1 width=64) (actual time=157877.84..157877.85 rows=1 loops=1)\n -> Subquery Scan ss (cost=119201.48..120951.48 rows=700000 width=64) (actual time=121286.65..149376.93 rows=700000 loops=1)\n -> Sort (cost=119201.48..120951.48 rows=700000 width=64) (actual time=121286.60..134075.61 rows=700000 loops=1)\n Sort Key: f1\n -> Seq Scan on nm (cost=0.00..15642.00 rows=700000 width=64) (actual time=0.21..24150.57 rows=700000 loops=1)\n Total runtime: 157962.79 msec\n(6 rows)\n\nIn C locale, the identical test sequence gives\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=83607.48..83607.48 rows=1 width=24) (actual time=187480.70..187480.71 rows=1 loops=1)\n -> Subquery Scan ss (cost=80107.48..81857.48 rows=700000 width=24) (actual time=141100.03..178625.97 rows=700000 loops=1)\n -> Sort (cost=80107.48..81857.48 rows=700000 width=24) (actual time=141099.98..162288.95 rows=700000 loops=1)\n Sort Key: f1\n -> Seq Scan on vc20 (cost=0.00..12148.00 rows=700000 width=24) (actual time=0.20..23954.71 rows=700000 loops=1)\n Total runtime: 187565.79 msec\n(6 rows)\n\nand of course about the same runtime as before for datatype NAME. So on\nthis platform (HPUX 10.20), en_US locale incurs about a 6x penalty over\nC locale for sorting varchars.\n\n\nNote that NAME beats VARCHAR by a noticeable margin even in C locale,\ndespite the handicap of requiring much more I/O (being 64 bytes per row\nnot 24). This surprises me; it looks like varstr_cmp() is reasonably\nwell optimized in the C-locale case. But the real loser is VARCHAR in\nnon-C locales. I suspect the primary time sink is strcoll() not the\npalloc/copy overhead in varstr_cmp(), but don't have time right now to\ndo profiling to prove it.\n\n\nAnyway, use of NAME instead of VARCHAR might be a workable workaround\nif you cannot change your database locale to C.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 15:56:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "Stephan Szabo kirjutas P, 17.11.2002 kell 22:29:\n> On Sun, 17 Nov 2002, pginfo wrote:\n> \n> > > On my not terribly powerful or memory filled box, I got a time of about\n> > > 16s after going through a couple iterations of raising sort_mem and\n> > > watching if it made temp files (which is probably a good idea to check as\n> > > well). The data size ended up being in the vicinity of 100 meg in my\n> > > case.\n> >\n> > The time is very good!\n> > It is very good idea to watch the temp files.\n> > I started the sort_mem to 32 mb (it is 256 on the production system)\n> > and I see 3 temp files. The first is ~ 1.8 mb. 
The second is ~55 mb and the last is ~150\n> > mb.\n> \n> As a note, the same data loaded into a non-\"C\" locale database took about\n> 42 seconds on the same machine, approximately 2.5x as long.\n\nI have investigated IBM's ICU (International Code for Unicode or smth\nlike that) in order to use it for implementing native UNICODE text\ntypes.\n\nThe sorting portion seems to work in two stages - 1. convert UTF_16 to\n\"sorting string\" and 2. compare said \"sorting strings\" - with the stages\nbeing also available separately.\n\nif the same is true for \"native\" locale support, then there is a good\nexplanation why the text sort is orders of magnitude slower than int\nsort: as the full conversion to \"sorting string\" has to be done at each\ncomparison (plus probably malloc/free) for locale-aware compare, but on\nmost cases in C locale one does not need these, plus the comparison can\nusually stop at first or second char.\n\nGetting good performance on locale-aware text sorts seems to require\nstoring these \"sorting strings\", either additionally or only these and\nfind a way for reverse conversion (\"sorting string\" --> original)\n\nSome speed could be gained by doing the original --> \"sorting string\"\nconversion only once for each line, but that will probably require a\nmajor rewrite of sorting code - in essence \n\nselect loctxt,a,b,c,d,e,f,g from mytab sort by localestring;\n\nshould become\n\nselect loctxt,a,b,c,d,e,f,g from (\n select localestring,a,b,c,d,e,f,g\n from mytab\n sort by sorting_string(loctxt)\n) t;\n\nor even\n\nselect loctxt,a,b,c,d,e,f,g from (\n select localestring,a,b,c,d,e,f,g, ss from (\n select localestring,a,b,c,d,e,f,g, sorting_string(loctxt) as ss from\n from mytab\n )\n sort by ss\n) t;\n\ndepending on how the second form is implemented (i.e. if\nsorting_string(loctxt) is evaluated once per row or one per compare)\n\n-------------\nHannu\n\n\n", "msg_date": "18 Nov 2002 02:10:12 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" }, { "msg_contents": "I've applied the attached patch to current sources (7.4devel). It\neliminates palloc/pfree overhead in varstr_cmp() for short strings\n(up to 1K as committed). I find that this reduces the sort time for\n700,000 rows by about 10% on my HPUX box; might be better on machines\nwith better-optimized strcoll().\n\n\t\t\tregards, tom lane\n\n*** src/backend/utils/adt/varlena.c.orig\tWed Sep 4 17:30:48 2002\n--- src/backend/utils/adt/varlena.c\tSun Nov 17 17:21:43 2002\n***************\n*** 736,771 ****\n varstr_cmp(char *arg1, int len1, char *arg2, int len2)\n {\n \tint\t\t\tresult;\n- \tchar\t *a1p,\n- \t\t\t *a2p;\n \n \t/*\n \t * Unfortunately, there is no strncoll(), so in the non-C locale case\n \t * we have to do some memory copying. This turns out to be\n \t * significantly slower, so we optimize the case where LC_COLLATE is\n! \t * C.\n \t */\n \tif (!lc_collate_is_c())\n \t{\n! \t\ta1p = (char *) palloc(len1 + 1);\n! \t\ta2p = (char *) palloc(len2 + 1);\n \n \t\tmemcpy(a1p, arg1, len1);\n! \t\t*(a1p + len1) = '\\0';\n \t\tmemcpy(a2p, arg2, len2);\n! \t\t*(a2p + len2) = '\\0';\n \n \t\tresult = strcoll(a1p, a2p);\n \n! \t\tpfree(a1p);\n! \t\tpfree(a2p);\n \t}\n \telse\n \t{\n! \t\ta1p = arg1;\n! \t\ta2p = arg2;\n! \n! \t\tresult = strncmp(a1p, a2p, Min(len1, len2));\n \t\tif ((result == 0) && (len1 != len2))\n \t\t\tresult = (len1 < len2) ? 
-1 : 1;\n \t}\n--- 736,782 ----\n varstr_cmp(char *arg1, int len1, char *arg2, int len2)\n {\n \tint\t\t\tresult;\n \n \t/*\n \t * Unfortunately, there is no strncoll(), so in the non-C locale case\n \t * we have to do some memory copying. This turns out to be\n \t * significantly slower, so we optimize the case where LC_COLLATE is\n! \t * C. We also try to optimize relatively-short strings by avoiding\n! \t * palloc/pfree overhead.\n \t */\n+ #define STACKBUFLEN\t\t1024\n+ \n \tif (!lc_collate_is_c())\n \t{\n! \t\tchar\ta1buf[STACKBUFLEN];\n! \t\tchar\ta2buf[STACKBUFLEN];\n! \t\tchar *a1p,\n! \t\t\t *a2p;\n! \n! \t\tif (len1 >= STACKBUFLEN)\n! \t\t\ta1p = (char *) palloc(len1 + 1);\n! \t\telse\n! \t\t\ta1p = a1buf;\n! \t\tif (len2 >= STACKBUFLEN)\n! \t\t\ta2p = (char *) palloc(len2 + 1);\n! \t\telse\n! \t\t\ta2p = a2buf;\n \n \t\tmemcpy(a1p, arg1, len1);\n! \t\ta1p[len1] = '\\0';\n \t\tmemcpy(a2p, arg2, len2);\n! \t\ta2p[len2] = '\\0';\n \n \t\tresult = strcoll(a1p, a2p);\n \n! \t\tif (len1 >= STACKBUFLEN)\n! \t\t\tpfree(a1p);\n! \t\tif (len2 >= STACKBUFLEN)\n! \t\t\tpfree(a2p);\n \t}\n \telse\n \t{\n! \t\tresult = strncmp(arg1, arg2, Min(len1, len2));\n \t\tif ((result == 0) && (len1 != len2))\n \t\t\tresult = (len1 < len2) ? -1 : 1;\n \t}\n", "msg_date": "Sun, 17 Nov 2002 18:05:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Some speed could be gained by doing the original --> \"sorting string\"\n> conversion only once for each line, but that will probably require a\n> major rewrite of sorting code - in essence\n\n> select loctxt,a,b,c,d,e,f,g from mytab sort by localestring;\n\n> should become\n\n> select loctxt,a,b,c,d,e,f,g from (\n> select localestring,a,b,c,d,e,f,g\n> from mytab\n> sort by sorting_string(loctxt)\n> ) t;\n\n> or even\n\n> select loctxt,a,b,c,d,e,f,g from (\n> select localestring,a,b,c,d,e,f,g, ss from (\n> select localestring,a,b,c,d,e,f,g, sorting_string(loctxt) as ss from\n> from mytab\n> )\n> sort by ss\n> ) t;\n\n> depending on how the second form is implemented (i.e. if\n> sorting_string(loctxt) is evaluated once per row or one per compare)\n\nIndeed the function call will be evaluated only once per row, so it\nwouldn't be too hard to kluge up a prototype implementation to test what\nthe real speed difference turns out to be. You'd basically need\n(a) a non-locale-aware set of comparison operators for type text ---\nyou might as well build a whole index opclass, so that non-locale-aware\nindexes could be made (this'd be a huge win for LIKE optimization too);\n(b) a strxfrm() function to produce the sortable strings.\n\nIf it turns out to be a big win, which is looking probable from the\ncomparisons Stephan and I just reported, then the next question is how\nto make the transformation occur automatically. 
I think it'd be\nrelatively simple to put a hack in the planner to do this when it's\nemitting a SORT operation that uses the locale-aware sort operators.\nIt'd be kind of an ugly special case, but surely no worse than the ones\nthat are in there already for LIKE and some other operators.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Nov 2002 18:54:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time " }, { "msg_contents": "Hi Tom,\n\nThe idea is very good.\nI recreated the tables and for all IDS_xxx I used name (not varchar(20)).\nThe the is also in unicode.\nI ran the query and got huge improvement!\nThe work time is 166 sek. ( before it was ~320 - 340 sek.).\n\nI will continue to make new tests and play around the setups.\nI think all this can be more quicker. I expect to get ~ 45-60 sek. ( this is the time in oracle), but laso 166 sek is good.\n\nI think that we need to work around the non us sorting and compare.\nIt is not possible to be so slow (all the functions are executed in memory\nand in java and by oracle and by ms all this is working very fast).\n\nregards,\nivan.\n\nTom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > As a note, the same data loaded into a non-\"C\" locale database took about\n> > 42 seconds on the same machine, approximately 2.5x as long.\n>\n> The non-C locale is undoubtedly the problem. I made a test dataset of\n> 700000 all-alphabetic 20-character random strings:\n>\n> $ head rand.data\n> duofoesrlycdnilvlcrg\n> crealjdrjpyczfbnlouo\n> lxaiyicslwjnxgpehtzp\n> ykizuovkvpkvvqsaocys\n> rkkvrqfiiybczwqdvvfu\n> stonxhbbvgwtjszodguv\n> prqxhwcfibiopjpiddud\n> ubgexbfdodhnauytebcf\n> urfoqifgbrladpssrwzw\n> ydcrsnxjpxospfqqoilw\n>\n> I performed the following experiment in 7.3 using a database in\n> en_US locale, SQL_ASCII encoding:\n>\n> enus=# create table vc20 (f1 varchar(20));\n> CREATE TABLE\n> enus=# \\copy vc20 from rand.data\n> \\.\n> enus=# vacuum analyze vc20;\n> VACUUM\n> enus=# set sort_mem to 50000;\n> SET\n> enus=# explain analyze select count(*) from\n> enus-# (select * from vc20 order by f1) ss;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=83607.48..83607.48 rows=1 width=24) (actual time=1058167.66..1058167.67 rows=1 loops=1)\n> -> Subquery Scan ss (cost=80107.48..81857.48 rows=700000 width=24) (actual time=1022972.86..1049559.50 rows=700000 loops=1)\n> -> Sort (cost=80107.48..81857.48 rows=700000 width=24) (actual time=1022972.80..1034036.58 rows=700000 loops=1)\n> Sort Key: f1\n> -> Seq Scan on vc20 (cost=0.00..12148.00 rows=700000 width=24) (actual time=0.20..24651.65 rows=700000 loops=1)\n> Total runtime: 1058220.10 msec\n> (6 rows)\n>\n> (The point of the select count(*) was to avoid shipping the result rows\n> to the client, but in hindsight \"explain analyze\" would suppress that\n> anyway. 
But the main datapoint here is the time for the Sort step.)\n>\n> I tried the test using datatype NAME as well, since it sorts using\n> plain strcmp() instead of strcoll():\n>\n> enus=# create table nm (f1 name);\n> CREATE TABLE\n> enus=# insert into nm select f1 from vc20;\n> INSERT 0 700000\n> enus=# vacuum analyze nm;\n> VACUUM\n> enus=# set sort_mem to 50000;\n> SET\n> enus=# explain analyze select count(*) from\n> enus-# (select * from nm order by f1) ss;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=122701.48..122701.48 rows=1 width=64) (actual time=157877.84..157877.85 rows=1 loops=1)\n> -> Subquery Scan ss (cost=119201.48..120951.48 rows=700000 width=64) (actual time=121286.65..149376.93 rows=700000 loops=1)\n> -> Sort (cost=119201.48..120951.48 rows=700000 width=64) (actual time=121286.60..134075.61 rows=700000 loops=1)\n> Sort Key: f1\n> -> Seq Scan on nm (cost=0.00..15642.00 rows=700000 width=64) (actual time=0.21..24150.57 rows=700000 loops=1)\n> Total runtime: 157962.79 msec\n> (6 rows)\n>\n> In C locale, the identical test sequence gives\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=83607.48..83607.48 rows=1 width=24) (actual time=187480.70..187480.71 rows=1 loops=1)\n> -> Subquery Scan ss (cost=80107.48..81857.48 rows=700000 width=24) (actual time=141100.03..178625.97 rows=700000 loops=1)\n> -> Sort (cost=80107.48..81857.48 rows=700000 width=24) (actual time=141099.98..162288.95 rows=700000 loops=1)\n> Sort Key: f1\n> -> Seq Scan on vc20 (cost=0.00..12148.00 rows=700000 width=24) (actual time=0.20..23954.71 rows=700000 loops=1)\n> Total runtime: 187565.79 msec\n> (6 rows)\n>\n> and of course about the same runtime as before for datatype NAME. So on\n> this platform (HPUX 10.20), en_US locale incurs about a 6x penalty over\n> C locale for sorting varchars.\n>\n> Note that NAME beats VARCHAR by a noticeable margin even in C locale,\n> despite the handicap of requiring much more I/O (being 64 bytes per row\n> not 24). This surprises me; it looks like varstr_cmp() is reasonably\n> well optimized in the C-locale case. But the real loser is VARCHAR in\n> non-C locales. 
I suspect the primary time sink is strcoll() not the\n> palloc/copy overhead in varstr_cmp(), but don't have time right now to\n> do profiling to prove it.\n>\n> Anyway, use of NAME instead of VARCHAR might be a workable workaround\n> if you cannot change your database locale to C.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n", "msg_date": "Mon, 18 Nov 2002 07:10:13 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "Hi,\nanother important point of view for me:\n\nAs I understand it, the size of sort_mem is set at postmaster start\nand is not shared.\nI can also use set sort_mem to xxx;\nCan I do set sort_mem to myvalue; execute my query; set sort_mem to old_value; only\nfor queries that need more sort memory?\n\nIf I can, will the newly set sort_mem apply only to the current connection or to all connections?\nAlso, will this dynamic sort_mem setting cause problems in pg?\n\nregards,\nivan.\n\n\n\n", "msg_date": "Mon, 18 Nov 2002 11:16:14 +0100", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sort time" }, { "msg_contents": "On Mon, 18 Nov 2002, pginfo wrote:\n\n> I think that we need to work around the non us sorting and compare.\n> It is not possible to be so slow (all the functions are executed in memory\n> and in java and by oracle and by ms all this is working very fast).\n\nI get similar results from the unix sort command, (8 sec elapsed for C\nlocale, 25 sec for en_US) on my redhat 8 machine (and I forced the buffer\nsize high enough to not get any temp files afaict).\n\nI'm not sure what platform Tom was using for his test, but maybe someone\ncould try this on a non x86/linux machine and see what they get (I don't\nhave access to one).\n\n", "msg_date": "Mon, 18 Nov 2002 03:56:50 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sort time" } ]
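For quick reference, the recipe that comes out of the thread above condenses to a few commands. This is only an illustrative sketch, not part of the original exchange: the vc20/f1 and nm test tables are the ones from Tom Lane's message, timings will vary with platform and data, the collation locale is fixed for the whole cluster when initdb is run, and sort_mem is measured in kilobytes.

    -- give the sort enough memory to stay off disk, then time the sort itself
    SET sort_mem TO 50000;
    EXPLAIN ANALYZE
      SELECT count(*) FROM (SELECT * FROM vc20 ORDER BY f1) ss;

    -- workaround when the cluster cannot be re-initialized under the C locale:
    -- sort a NAME copy of the column, which compares with strcmp() rather than strcoll()
    CREATE TABLE nm (f1 name);
    INSERT INTO nm SELECT f1 FROM vc20;
    EXPLAIN ANALYZE
      SELECT count(*) FROM (SELECT * FROM nm ORDER BY f1) ss;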
[ { "msg_contents": "\nIs it possible to configure the digest form for this list? It's starting\nto get busy (which is relative I know, but I'm on a lot of lists and prefer\ndigest...others too?).\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nMy other vehicle is my imagination.\n - bumper sticker\n\n", "msg_date": "Thu, 14 Nov 2002 13:46:29 -0800 (PST)", "msg_from": "Laurette Cisneros <[email protected]>", "msg_from_op": true, "msg_subject": "digest" }, { "msg_contents": "Laurette,\n\n> Is it possible to configure the digest form for this list? It's\n> starting\n> to get busy (which is relative I know, but I'm on a lot of lists and\n> prefer\n> digest...others too?).\n> \n> Thanks,\n\nAt some point in the past, you should have received an e-mail from\[email protected] with your user password and instructions on\nsetting list options. There is both a web inteface, and you can set\noptions by e-mail.\n\nThere is way to get your password if you don't have it, but I don't\nhave the commands in front of me, right now.\n\nIf I'm being an idiot, and you're telling me that there is no digest\nmode set up for the list, then you should e-mail Marc ([email protected])\nwith a polite request to set daily digests for the list.\n\n\n-Josh Berkus\n", "msg_date": "Thu, 14 Nov 2002 16:30:12 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: digest" }, { "msg_contents": "There is no digest set up for this list. I will send mail to Marc.\n\nThanks,\n\nL.\nOn Thu, 14 Nov 2002, Josh Berkus wrote:\n\n> Laurette,\n> \n> > Is it possible to configure the digest form for this list? It's\n> > starting\n> > to get busy (which is relative I know, but I'm on a lot of lists and\n> > prefer\n> > digest...others too?).\n> > \n> > Thanks,\n> \n> At some point in the past, you should have received an e-mail from\n> [email protected] with your user password and instructions on\n> setting list options. There is both a web inteface, and you can set\n> options by e-mail.\n> \n> There is way to get your password if you don't have it, but I don't\n> have the commands in front of me, right now.\n> \n> If I'm being an idiot, and you're telling me that there is no digest\n> mode set up for the list, then you should e-mail Marc ([email protected])\n> with a polite request to set daily digests for the list.\n> \n> \n> -Josh Berkus\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nMy other vehicle is my imagination.\n - bumper sticker\n\n", "msg_date": "Thu, 14 Nov 2002 16:34:53 -0800 (PST)", "msg_from": "Laurette Cisneros <[email protected]>", "msg_from_op": true, "msg_subject": "Re: digest" } ]
[ { "msg_contents": "Hi all,\n\ni've a doubt about how FOR/LOOP works in plpgsql.\n\nIt seems to me that the SELECT query executed in that way is much slower\nthat the same being executed interactively in psql.\n\nIn particular it seems that it doesn't make use of indexes.\n\nDoes it have any sense or am i wrong/missing something ?\n\nThanks all.\n\nCiao\n", "msg_date": "Fri, 15 Nov 2002 13:31:56 +0100", "msg_from": "Federico <[email protected]>", "msg_from_op": true, "msg_subject": "for/loop performance in plpgsql ?" }, { "msg_contents": "On Friday 15 Nov 2002 12:31 pm, Federico wrote:\n> Hi all,\n>\n> i've a doubt about how FOR/LOOP works in plpgsql.\n>\n> It seems to me that the SELECT query executed in that way is much slower\n> that the same being executed interactively in psql.\n>\n> In particular it seems that it doesn't make use of indexes.\n>\n> Does it have any sense or am i wrong/missing something ?\n\nWell - the query might well be pre-parsed which means it wouldn't notice any \nupdated stats. Can you provide an example of your code?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 15 Nov 2002 13:37:25 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for/loop performance in plpgsql ?" }, { "msg_contents": "On Fri, Nov 15, 2002 at 01:37:25PM +0000, Richard Huxton wrote:\n> On Friday 15 Nov 2002 12:31 pm, Federico wrote:\n> > Hi all,\n> >\n> > i've a doubt about how FOR/LOOP works in plpgsql.\n> >\n> > It seems to me that the SELECT query executed in that way is much slower\n> > that the same being executed interactively in psql.\n> >\n> > In particular it seems that it doesn't make use of indexes.\n> >\n> > Does it have any sense or am i wrong/missing something ?\n> \n> Well - the query might well be pre-parsed which means it wouldn't notice any \n> updated stats. Can you provide an example of your code?\n\nIt's nothing particular strange. It's something like :\n\nresult record;\n\nfor result in select rai, tem\n from data\n where (codice LIKE cod_staz and\n ora > orain and\n ora <= orafin) loop\n\n-- do some calculation\n\nend loop;\n\nIf i do the same select with pgsql it runs much faster. Is to be noticed\nthat the calculations it does in the loop are just \"light\", nothing that\nshould matter.\n\nI'll just investigate about this strange behaviour.\n\nThanks !\n\nCiao !\n", "msg_date": "Mon, 18 Nov 2002 16:02:17 +0100", "msg_from": "Federico <[email protected]>", "msg_from_op": true, "msg_subject": "Re: for/loop performance in plpgsql ?" }, { "msg_contents": "Federico <[email protected]> writes:\n>> Well - the query might well be pre-parsed which means it wouldn't notice any\n>> updated stats. Can you provide an example of your code?\n\n> It's nothing particular strange. It's something like :\n\n> result record;\n\n> for result in select rai, tem\n> from data\n> where (codice LIKE cod_staz and\n> ora > orain and\n> ora <= orafin) loop\n\nWhich of these names are columns of the selected tables, and which ones\nare plpgsql variables?\n\nThe planner has to fall back to default selectivity estimates when it's\nlooking at queries involving plpgsql variables (since it can't know\ntheir actual values in advance). I suspect your problem is related to\nan inaccurate default estimate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Nov 2002 00:23:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for/loop performance in plpgsql ? 
" }, { "msg_contents": "On Tue, Nov 19, 2002 at 12:23:56AM -0500, Tom Lane wrote:\n> > result record;\n> \n> > for result in select rai, tem\n> > from data\n> > where (codice LIKE cod_staz and\n> > ora > orain and\n> > ora <= orafin) loop\n> \n> Which of these names are columns of the selected tables, and which ones\n> are plpgsql variables?\n\nrai, tem, codice, ora \tare columns name\ncod_staz, orain, orafin are plpgsql variables\n\n> The planner has to fall back to default selectivity estimates when it's\n> looking at queries involving plpgsql variables (since it can't know\n> their actual values in advance). I suspect your problem is related to\n> an inaccurate default estimate.\n\nmmm... does it mean that i can't do anything about that ? \n\n", "msg_date": "Tue, 19 Nov 2002 10:49:26 +0100", "msg_from": "Federico /* juri */ Pedemonte <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for/loop performance in plpgsql ?" }, { "msg_contents": "Federico /* juri */ Pedemonte <[email protected]> writes:\n> On Tue, Nov 19, 2002 at 12:23:56AM -0500, Tom Lane wrote:\n>> The planner has to fall back to default selectivity estimates when it's\n>> looking at queries involving plpgsql variables (since it can't know\n>> their actual values in advance). I suspect your problem is related to\n>> an inaccurate default estimate.\n\n> mmm... does it mean that i can't do anything about that ? \n\nA brute-force solution is to use EXECUTE so that the query is re-planned\neach time, with the planner seeing constants instead of variables\ncompared to the column values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Nov 2002 09:04:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: for/loop performance in plpgsql ? " } ]
[ { "msg_contents": "Henrik,\n\n> I tested it, and it doesn't seem to change significantly\n> \n> Now I am trying the more regular vacuuming, too ... every 5 minutes\n> the most important tables are vacuumed... this, too, gives a bit\n> more performance.\n\nDisregard that last message. I just caught up with the list.\n\nIf you're VACUUMing constantly, then another setting you may want to\ntweak is vacuum_mem. Like the other memory settings, you're looking\nfor the \"sweet spot\" where vacuum_mem is high enough that your vacuums\nare over quickly, but low enough that it doesn't take away memory from\nother processes and slow them down.\n\nIt may be that you don't need to change the value at all. I would,\nhowever, try increasing and decreasing my vacuum_mem to find the level\nat which the frequent vacuums have the greatest benefit.\n\n-Josh Berkus\n", "msg_date": "Fri, 15 Nov 2002 09:22:32 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgrade to dual processor machine? " } ]
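A concrete form of what Josh describes, as a sketch only: the table name is invented, vacuum_mem is measured in kilobytes and is allocated by the backend running the vacuum, and the right value still has to be found by the measuring he suggests.

    SET vacuum_mem = 16384;            -- e.g. 16MB instead of the 8192 default
    VACUUM ANALYZE important_table;    -- hypothetical table name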
[ { "msg_contents": "Please send us an 'EXPLAIN ANALYZE' of the query.\n\nThanks\n\n\nOn Fri, 2002-11-15 at 15:51, Peter T. Brown wrote:\n> Hi--\n> \n> I have this rather long complex query that takes really long to complete\n> (please see below). It seems like I ought to improve the speed somehow.\n> I don't understand, for example, what the query planner is doing when it\n> says \"Hash\" and why this appears to take so long. And since I have a key\n> for Visitor.ID, I don't understand why its doing a sequential scan on\n> that table... \n> \n> Any advice would be greatly appreciated!\n> \n> \n> Thanks\n> \n> Peter\n> \n> \n> \n> \n> EXPLAIN SELECT \n> \"Visitor\".\"Created\",\n> \"Visitor\".\"Updated\",\n> \"Tidbit\".\"ID\",\n> \"ProgramEvent\".\"ID\",\n> \"Visitor\".\"Email\",\n> \"Interest\".\"ID\",\n> \"VisitorInternetDeviceAssoc\".\"ID\",\n> \"Referral\".\"ID\" \n> \n> FROM \"VisitorExtra\" \n> \n> LEFT OUTER JOIN Tidbit\" ON \n> \"VisitorExtra\".\"ID\"=\"Tidbit\".\"VisitorID\" \n> \n> LEFT OUTER JOIN \"ProgramEvent\" ON\n> \"VisitorExtra\".\"ID\"=\"ProgramEvent\".\"VisitorID\" \n> \n> LEFT OUTER JOIN \"Interest\" ON \n> \"VisitorExtra\".\"ID\"=\"Interest\".\"VisitorID\" \n> \n> LEFT OUTER JOIN \"VisitorInternetDeviceAssoc\" ON\n> \"VisitorExtra\".\"ID\"=\"VisitorInternetDeviceAssoc\".\"VisitorID\" \n> \n> LEFT OUTER JOIN \"Referral\" ON\n> \"VisitorExtra\".\"ID\"=\"Referral\".\"FromVisitorID\",\"Visitor\" \n> \n> WHERE \"VisitorExtra\".\"ID\"=\"Visitor\".\"ID\" AND \n> \"VisitorExtra\".\"ID\"= 325903;\n> \n> \n> \n> \n> QUERY PLAN:\n> \n> Hash Join (cost=14584.37..59037.79 rows=57747 width=76)\n> -> Merge Join (cost=0.00..36732.65 rows=57747 width=44)\n> -> Merge Join (cost=0.00..29178.16 rows=10681 width=36)\n> -> Nested Loop (cost=0.00..10505.74 rows=6674 width=28)\n> -> Nested Loop (cost=0.00..435.29 rows=177\n> \t\t\twidth=20)\n> -> Nested Loop (cost=0.00..15.70 rows=55\n> \t\t\t\twidth=12)\n> -> Index Scan using VisitorExtra_pkey\n> \t\t\t\t on VisitorExtra (cost=0.00..3.01 \t\t\t\t rows=1 width=4)\n> -> Index Scan using \t\t\t\t \n> Tidbit_VisitorID_key on Tidbit \t\t\t\t (cost=0.00..12.67 rows=2\n> width=8)\n> -> Index Scan using \t\t\t \t\t\t \n> ProgramEvent_VisitorID_key on ProgramEvent \t\t\t (cost=0.00..7.57\n> rows=2 width=8)\n> -> Index Scan using Interest_VisitorID_key on\n> \t\t\tInterest (cost=0.00..56.66 rows=19 width=8)\n> -> Index Scan using VisitorInternetDeviceAssoc_Visi on\n> \t\t VisitorInternetDeviceAssoc (cost=0.00..16402.90 rows=174887 \t\t \n> width=8)\n> -> Index Scan using Referral_FromVisitorID_key on Referral \n> \t (cost=0.00..6323.41 rows=87806 width=8)\n> -> Hash (cost=6061.79..6061.79 rows=317379 width=32)\n> -> Seq Scan on Visitor (cost=0.00..6061.79 rows=317379 \t \n> width=32)\n-- \nRod Taylor <[email protected]>\n", "msg_date": "15 Nov 2002 15:09:31 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can this query be made to run faster?" }, { "msg_contents": "Hi--\n\nI have this rather long complex query that takes really long to complete\n(please see below). It seems like I ought to improve the speed somehow.\nI don't understand, for example, what the query planner is doing when it\nsays \"Hash\" and why this appears to take so long. And since I have a key\nfor Visitor.ID, I don't understand why its doing a sequential scan on\nthat table... 
\n\nAny advice would be greatly appreciated!\n\n\nThanks\n\nPeter\n\n\n\n\nEXPLAIN SELECT \n \"Visitor\".\"Created\",\n \"Visitor\".\"Updated\",\n \"Tidbit\".\"ID\",\n \"ProgramEvent\".\"ID\",\n \"Visitor\".\"Email\",\n \"Interest\".\"ID\",\n \"VisitorInternetDeviceAssoc\".\"ID\",\n \"Referral\".\"ID\" \n\nFROM \"VisitorExtra\" \n\nLEFT OUTER JOIN Tidbit\" ON \n \"VisitorExtra\".\"ID\"=\"Tidbit\".\"VisitorID\" \n\nLEFT OUTER JOIN \"ProgramEvent\" ON\n \"VisitorExtra\".\"ID\"=\"ProgramEvent\".\"VisitorID\" \n\nLEFT OUTER JOIN \"Interest\" ON \n \"VisitorExtra\".\"ID\"=\"Interest\".\"VisitorID\" \n\nLEFT OUTER JOIN \"VisitorInternetDeviceAssoc\" ON\n \"VisitorExtra\".\"ID\"=\"VisitorInternetDeviceAssoc\".\"VisitorID\" \n\nLEFT OUTER JOIN \"Referral\" ON\n \"VisitorExtra\".\"ID\"=\"Referral\".\"FromVisitorID\",\"Visitor\" \n\nWHERE \"VisitorExtra\".\"ID\"=\"Visitor\".\"ID\" AND \n \"VisitorExtra\".\"ID\"= 325903;\n\n\n\n\nQUERY PLAN:\n\nHash Join (cost=14584.37..59037.79 rows=57747 width=76)\n -> Merge Join (cost=0.00..36732.65 rows=57747 width=44)\n -> Merge Join (cost=0.00..29178.16 rows=10681 width=36)\n -> Nested Loop (cost=0.00..10505.74 rows=6674 width=28)\n -> Nested Loop (cost=0.00..435.29 rows=177\n\t\t\twidth=20)\n -> Nested Loop (cost=0.00..15.70 rows=55\n\t\t\t\twidth=12)\n -> Index Scan using VisitorExtra_pkey\n\t\t\t\t on VisitorExtra (cost=0.00..3.01 \t\t\t\t rows=1 width=4)\n -> Index Scan using \t\t\t\t \nTidbit_VisitorID_key on Tidbit \t\t\t\t (cost=0.00..12.67 rows=2\nwidth=8)\n -> Index Scan using \t\t\t \t\t\t \nProgramEvent_VisitorID_key on ProgramEvent \t\t\t (cost=0.00..7.57\nrows=2 width=8)\n -> Index Scan using Interest_VisitorID_key on\n\t\t\tInterest (cost=0.00..56.66 rows=19 width=8)\n -> Index Scan using VisitorInternetDeviceAssoc_Visi on\n\t\t VisitorInternetDeviceAssoc (cost=0.00..16402.90 rows=174887 \t\t \nwidth=8)\n -> Index Scan using Referral_FromVisitorID_key on Referral \n\t (cost=0.00..6323.41 rows=87806 width=8)\n -> Hash (cost=6061.79..6061.79 rows=317379 width=32)\n -> Seq Scan on Visitor (cost=0.00..6061.79 rows=317379 \t \nwidth=32)\n\n\n-- \n\nPeter T. Brown\nDirector Of Technology\nMemetic Systems, Inc.\n\"Translating Customer Data Into Marketing Action.\"\n206.335.2927\nhttp://www.memeticsystems.com/\n\n", "msg_date": "15 Nov 2002 12:51:18 -0800", "msg_from": "\"Peter T. Brown\" <[email protected]>", "msg_from_op": false, "msg_subject": "can this query be made to run faster?" } ]
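The difference between what Peter posted (plain EXPLAIN) and what Rod is asking for: EXPLAIN ANALYZE, available from PostgreSQL 7.2 on, actually executes the statement and prints real times and row counts next to the planner's estimates. A self-contained toy illustration of the two forms follows (table and value invented); running the second form on the big join above is what would show whether the Seq Scan on "Visitor" is really where the time goes.

    CREATE TABLE t (id integer);
    EXPLAIN SELECT * FROM t WHERE id = 1;           -- estimated costs and row counts only
    EXPLAIN ANALYZE SELECT * FROM t WHERE id = 1;   -- runs the query, adds actual time and rows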
[ { "msg_contents": "Hi All, \n\nWe are using Postgres 7.1, on Solaris 8 - hardware is a 400mhz Netra X1,\n512Mb ram, with the database on a separate partition.\n\nOur main result tables are getting really big, and we don't want to delete\nany data yet. Currently, our largest table has around 10 million rows and\nis going up at a rate of around 1 million per month. The table has 13\ninteger, one boolean and one timestamp column. We index the table on an ID\nnumber and the timestamp. We vacuum analyse the table every night. The\nperformance has steadily degraded, and the more data we try and select, the\nlonger the select queries take.\nThe queries are not complex, and do not involve any unions etc, eg:\n\nSELECT * FROM table_name WHERE column1 = 454 AND time BETWEEN '2002-10-13\n13:44:00.0' AND '2002-11-14'\n\nSELECT count(DISTINCT id) FROM table_name WHERE column1 = 454 AND time\nBETWEEN '2002-10-13 13:44:00.0' AND '2002-11-14 \n\nSee various queries and explains at the end this email for more info on the\ntype of queries we are doing.\nMost of the queries use a sequence scan - disabling this and forcing index\nscan decreases performance further for those queries.\n\nThese queries are sometimes taking over 2 minutes to perform!!!! If we\nreduce the table size significantly (i.e. around 1 million rows)is is\nobviously faster - down to a few seconds.\n\nWe then tried the DB on a clean installation of Solaris 9, on a dual 400mhz\nprocessor SunE250 with 2Gb ram, and 2 scsi 17gb disks. We put the database\nonto the second disk. Surprisingly the performance is only 5-10% greater.\nI expected far more, due to the increased power of the machine. Looking at\nthe os info on this machine, the IO wait is negligible as is the cpu usage.\nSo this machine is not working as hard as the Netra X1, though the time\ntaken to perform queries is not too much different.\n\nWe have tried tweaking the shared buffers and sort mem (also tweaking kernel\nshared mem size), which make little difference, and in fact if we increase\nit to around 25% of total memory performance degrades slightly. 
We have\nchanged from the default amount of shared buffers, to 64000 to give us\naccess to 25% of the total system memory.\n\nAny ideas on how we can select data more quickly from large tables?\n\nOther ideas we had was to split the data over multiple table by id\n(resulting in several thousand tables), however this would make management\nof the database in terms of keys, triggers and integrity very difficult and\nmessy.\n\n\nI hope someone can offer some advice.\n\nCheers\n\nNikk\n\n- Queries and explain plans\n\nselect count(*) from table_name;\nNOTICE: QUERY PLAN:\nAggregate (cost=488700.65..488700.65 rows=1 width=0)\n -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=0)\n\nhawkdb=# explain select count(job_id) from table_name;\nNOTICE: QUERY PLAN:\nAggregate (cost=488700.65..488700.65 rows=1 width=4)\n -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=4)\n\nhawkdb=# explain select * from table_name;\nNOTICE: QUERY PLAN:\nSeq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n\nhawkdb=# explain select count(*) from table_name where job_id = 13;\nNOTICE: QUERY PLAN:\nAggregate (cost=537874.18..537874.18 rows=1 width=0)\n -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 width=0)\n\nhawkdb=# explain select * from table_name where job_id = 13;\nNOTICE: QUERY PLAN:\nSeq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n\nhawkdb=# explain select * from table_name where job_id = 1;\nNOTICE: QUERY PLAN:\nIndex Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\nwidth=57)\n\nhawkdb=#explain select * from table_name where time > '2002-10-10';\nNOTICE: QUERY PLAN:\nSeq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n\nhawkdb=# explain select * from http_result where time < '2002-10-10';\nNOTICE: QUERY PLAN:\nIndex Scan using table_name_time on table_name (cost=0.00..75879.17\nrows=19669 width=57)\n\n\n\nNikk Anderson\n\nParallel ltd.\nCranfield Innovation Centre\nUniversity Way\nCranfield\nBedfordshire\nMK43 0BT \n\nhttp://www.nexuswatch.com\nhttp://www.parallel.ltd.uk\n\nTel: +44 (0)8700 PARALLEL (727255)\nFax: +44 (0)8700 PARAFAX (727232) \n\n\n******************************************************************\nPrivileged/Confidential Information may be contained in this\nmessage. If you are not the addressee indicated in this message\n(or responsible for delivery of the message to such person), you\nmay not copy or deliver this message to anyone. In such case, you\nshould destroy this message and kindly notify the sender by reply\nemail. Please advise immediately if you or your employer do not\nconsent to Internet email for messages of this kind. Opinions,\nconclusions and other information in this message that do not\nrelate to the official business of Parallel shall be understood\nas neither given nor endorsed by it.\n\nUnless agreed otherwise by way of a signed agreement, any business\nconducted by Parallel shall be subject to its Standard Terms\nand Conditions which are available upon request.\n******************************************************************\n", "msg_date": "Mon, 18 Nov 2002 12:32:45 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "selects from large tables" }, { "msg_contents": "\nOn Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n> Any ideas on how we can select data more quickly from large tables?\n\nAre these row estimates realistic? It's estimating nearly 20 million rows\nto be returned by some of the queries (unless I'm misreading the\nnumber - possible since it's 5am here). 
At that point you almost\ncertainly want to be using a cursor rather than plain queries since even a\nsmall width result (say 50 bytes) gives a very large (1 gig) result set.\n\n> - Queries and explain plans\n>\n> select count(*) from table_name;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=488700.65..488700.65 rows=1 width=0)\n> -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=0)\n>\n> hawkdb=# explain select count(job_id) from table_name;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=488700.65..488700.65 rows=1 width=4)\n> -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=4)\n>\n> hawkdb=# explain select * from table_name;\n> NOTICE: QUERY PLAN:\n> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n>\n> hawkdb=# explain select count(*) from table_name where job_id = 13;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=537874.18..537874.18 rows=1 width=0)\n> -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 width=0)\n>\n> hawkdb=# explain select * from table_name where job_id = 13;\n> NOTICE: QUERY PLAN:\n> Seq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n>\n> hawkdb=# explain select * from table_name where job_id = 1;\n> NOTICE: QUERY PLAN:\n> Index Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\n> width=57)\n>\n> hawkdb=#explain select * from table_name where time > '2002-10-10';\n> NOTICE: QUERY PLAN:\n> Seq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n>\n> hawkdb=# explain select * from http_result where time < '2002-10-10';\n> NOTICE: QUERY PLAN:\n> Index Scan using table_name_time on table_name (cost=0.00..75879.17\n> rows=19669 width=57)\n\n", "msg_date": "Mon, 18 Nov 2002 05:02:03 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n>> Any ideas on how we can select data more quickly from large tables?\n\n> Are these row estimates realistic?\n\nShowing EXPLAIN ANALYZE results would be much more useful than just\nEXPLAIN.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 10:03:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables " } ]
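The cursor approach Stephan mentions can be spelled out briefly. This is only a sketch, using the table_name/job_id/time names from the posted plans and an arbitrary batch size; DECLARE has to run inside a transaction block, and the client keeps fetching until a FETCH comes back empty.

    BEGIN;
    DECLARE big_rows CURSOR FOR
        SELECT * FROM table_name
        WHERE job_id = 13
          AND time BETWEEN '2002-10-13' AND '2002-11-14';
    FETCH 1000 FROM big_rows;   -- repeat until no rows are returned
    CLOSE big_rows;
    COMMIT;

Fetching in batches keeps the client from having to buffer a multi-hundred-megabyte result set in memory all at once.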
[ { "msg_contents": "Hi, \nThanks for the reply Stephen, the data is 'somewhat' realistic.....\n\nThe data in the table is actually synthetic, but the structure is the same\nas our live system, and the queries are similar to those we actually carry\nout. \n\nAs the data was synthetic there was a bit of repetition (19 million rows of\nrepetition!! ) of the item used in the where clause, meaning that most of\nthe table was returned by the queries - oops! So, I have done is some more\nrealistic queries from our live system, and put the time it takes, and the\nexplain results. Just to note that the explain's estimated number of rows\nis way out - its guesses are way too low.\n\nTypically a normal query on our live system returns between 200 and 30000\nrows depending on the reports a user wants to generate. In prior testing,\nwe noted that using SELECT COUNT( .. was slower than other queries, which\nis why we though we would test counts first.\n\n\nHere are some more realistic results, which still take a fair whack of\ntime........\n\n\nStarting query 0\nQuery 0: SELECT * FROM xx WHERE time BETWEEN '2002-11-17 14:08:58.021' AND\n'2002-11-18 14:08:58.021' AND job_id = 335\nTime taken = 697 ms\nIndex Scan using http_timejobid on xx (cost=0.00..17.01 rows=4 width=57)\nThis query returns 500 rows of data\n\n\nStarting query 1\nQuery 1: SELECT * FROM xx WHERE time BETWEEN '2002-11-11 14:08:58.021' AND\n'2002-11-18 14:08:58.021' AND job_id = 335\nTime taken = 15 seconds\nIndex Scan using http_timejobid on xx (cost=0.00..705.57 rows=175 width=57)\nThis query return 3582 rows\n\nStarting query 2\nQuery 2: SELECT * FROM xx WHERE time BETWEEN '2002-10-19 15:08:58.021' AND\n'2002-11-18 14:08:58.021' AND job_id = 335;\nTime taken = 65 seconds\nIndex Scan using http_timejobid on xx (cost=0.00..3327.55 rows=832\nwidth=57)\nThis query returns 15692 rows \n\nStarting query 3\nQuery 3: SELECT * FROM xx_result WHERE time BETWEEN '2002-08-20\n15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\nTime taken = 241 seconds\nIndex Scan using http_timejobid on xx (cost=0.00..10111.36 rows=2547\nwidth=57)\nThis query returns 48768 rows \n\n\nCheers\n\nNikk\n\n\n\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:[email protected]]\nSent: 18 November 2002 13:02\nTo: Nikk Anderson\nCc: [email protected]\nSubject: Re: [PERFORM] selects from large tables\n\n\n\nOn Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n> Any ideas on how we can select data more quickly from large tables?\n\nAre these row estimates realistic? It's estimating nearly 20 million rows\nto be returned by some of the queries (unless I'm misreading the\nnumber - possible since it's 5am here). 
At that point you almost\ncertainly want to be using a cursor rather than plain queries since even a\nsmall width result (say 50 bytes) gives a very large (1 gig) result set.\n\n> - Queries and explain plans\n>\n> select count(*) from table_name;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=488700.65..488700.65 rows=1 width=0)\n> -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=0)\n>\n> hawkdb=# explain select count(job_id) from table_name;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=488700.65..488700.65 rows=1 width=4)\n> -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=4)\n>\n> hawkdb=# explain select * from table_name;\n> NOTICE: QUERY PLAN:\n> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n>\n> hawkdb=# explain select count(*) from table_name where job_id = 13;\n> NOTICE: QUERY PLAN:\n> Aggregate (cost=537874.18..537874.18 rows=1 width=0)\n> -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 width=0)\n>\n> hawkdb=# explain select * from table_name where job_id = 13;\n> NOTICE: QUERY PLAN:\n> Seq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n>\n> hawkdb=# explain select * from table_name where job_id = 1;\n> NOTICE: QUERY PLAN:\n> Index Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\n> width=57)\n>\n> hawkdb=#explain select * from table_name where time > '2002-10-10';\n> NOTICE: QUERY PLAN:\n> Seq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n>\n> hawkdb=# explain select * from http_result where time < '2002-10-10';\n> NOTICE: QUERY PLAN:\n> Index Scan using table_name_time on table_name (cost=0.00..75879.17\n> rows=19669 width=57)\n\n\n\n\n\nRE: [PERFORM] selects from large tables\n\n\nHi, \nThanks for the reply Stephen, the data is 'somewhat' realistic.....\n\nThe data in the table is actually synthetic, but the structure is the same as our live system, and the queries are similar to those we actually carry out.  \nAs the data was synthetic there was a bit of repetition (19 million rows of repetition!! ) of the item used in the where clause, meaning that most of the table was returned by the queries - oops!  So, I have done is some more realistic queries from our live system, and put the time it takes, and the explain results.  Just to note that the explain's estimated number of rows is way out - its guesses are way too low.\nTypically a normal query on our live system returns between 200 and 30000 rows depending on the reports a user wants to generate.  In prior testing, we noted that using SELECT COUNT( ..   
was slower than other queries, which is why we though we would test counts first.\n\nHere are some more realistic results, which still take a fair whack of time........\n\n\nStarting query 0\nQuery 0: SELECT * FROM xx WHERE time BETWEEN '2002-11-17 14:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335\nTime taken = 697 ms\nIndex Scan using http_timejobid on xx  (cost=0.00..17.01 rows=4 width=57)\nThis query returns 500 rows of data\n\n\nStarting query 1\nQuery 1: SELECT * FROM xx WHERE time BETWEEN '2002-11-11 14:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335\nTime taken = 15 seconds\nIndex Scan using http_timejobid on xx  (cost=0.00..705.57 rows=175 width=57)\nThis query return 3582 rows\n\nStarting query 2\nQuery 2: SELECT * FROM xx WHERE time BETWEEN '2002-10-19 15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\nTime taken = 65 seconds\nIndex Scan using http_timejobid on xx  (cost=0.00..3327.55 rows=832 width=57)\nThis query returns 15692 rows \n\nStarting query 3\nQuery 3: SELECT * FROM xx_result WHERE time BETWEEN '2002-08-20 15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\nTime taken = 241 seconds\nIndex Scan using http_timejobid on xx  (cost=0.00..10111.36 rows=2547 width=57)\nThis query returns 48768 rows \n\n\nCheers\n\nNikk\n\n\n\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:[email protected]]\nSent: 18 November 2002 13:02\nTo: Nikk Anderson\nCc: [email protected]\nSubject: Re: [PERFORM] selects from large tables\n\n\n\nOn Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n> Any ideas on how we can select data more quickly from large tables?\n\nAre these row estimates realistic? It's estimating nearly 20 million rows\nto be returned by some of the queries (unless I'm misreading the\nnumber - possible since it's 5am here).  
At that point you almost\ncertainly want to be using a cursor rather than plain queries since even a\nsmall width result (say 50 bytes) gives a very large (1 gig) result set.\n\n> - Queries and explain plans\n>\n> select count(*) from table_name;\n> NOTICE:  QUERY PLAN:\n> Aggregate  (cost=488700.65..488700.65 rows=1 width=0)\n>   ->  Seq Scan on table_name  (cost=0.00..439527.12 rows=19669412 width=0)\n>\n> hawkdb=# explain select count(job_id) from table_name;\n> NOTICE:  QUERY PLAN:\n> Aggregate  (cost=488700.65..488700.65 rows=1 width=4)\n>   ->  Seq Scan on table_name  (cost=0.00..439527.12 rows=19669412 width=4)\n>\n> hawkdb=# explain select * from table_name;\n> NOTICE:  QUERY PLAN:\n> Seq Scan on table_name  (cost=0.00..439527.12 rows=19669412 width=57)\n>\n> hawkdb=# explain select count(*) from table_name where job_id = 13;\n> NOTICE:  QUERY PLAN:\n> Aggregate  (cost=537874.18..537874.18 rows=1 width=0)\n>   ->  Seq Scan on table_name  (cost=0.00..488700.65 rows=19669412 width=0)\n>\n> hawkdb=# explain select * from table_name where job_id = 13;\n> NOTICE:  QUERY PLAN:\n> Seq Scan on http_result  (cost=0.00..488700.65 rows=19669412 width=57)\n>\n> hawkdb=# explain select * from table_name where job_id = 1;\n> NOTICE:  QUERY PLAN:\n> Index Scan using http_result_pk on table_name  (cost=0.00..5.01 rows=1\n> width=57)\n>\n> hawkdb=#explain select * from table_name where time > '2002-10-10';\n> NOTICE:  QUERY PLAN:\n> Seq Scan on table_name  (cost=0.00..488700.65 rows=19649743 width=57)\n>\n> hawkdb=# explain select * from http_result where time < '2002-10-10';\n> NOTICE:  QUERY PLAN:\n> Index Scan using table_name_time on table_name  (cost=0.00..75879.17\n> rows=19669 width=57)", "msg_date": "Mon, 18 Nov 2002 15:31:34 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "Nikk:\n\nAre you doing vaccums on these tables? I was under the understanding \nthat the estimated row count should be close to the real row count \nreturned, and when it is not (as it looks in your case), the primary \nreason for the disconnect is that the stats for the tables are \nout-of-date. \n\nSince it used the indexes, I am not sure if the old stats are causing \nany issues, but I suspect they are not helping. \n\nAlso, do you do any clustering of the data (since the queries are mostly \ntime limited)? I am wondering if the system is doing lots of seeks to \nget the data (implying that the data is all over the disk and not \nclustered). \n\nCharlie\n\nNikk Anderson wrote:\n\n> Hi,\n> Thanks for the reply Stephen, the data is 'somewhat' realistic.....\n>\n> The data in the table is actually synthetic, but the structure is the \n> same as our live system, and the queries are similar to those we \n> actually carry out. \n>\n> As the data was synthetic there was a bit of repetition (19 million \n> rows of repetition!! ) of the item used in the where clause, meaning \n> that most of the table was returned by the queries - oops! So, I have \n> done is some more realistic queries from our live system, and put the \n> time it takes, and the explain results. Just to note that the \n> explain's estimated number of rows is way out - its guesses are way \n> too low.\n>\n> Typically a normal query on our live system returns between 200 and \n> 30000 rows depending on the reports a user wants to generate. In \n> prior testing, we noted that using SELECT COUNT( .. 
was slower than \n> other queries, which is why we though we would test counts first.\n>\n>\n> Here are some more realistic results, which still take a fair whack of \n> time........\n>\n>\n> Starting query 0\n> Query 0: SELECT * FROM xx WHERE time BETWEEN '2002-11-17 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 697 ms\n> Index Scan using http_timejobid on xx (cost=0.00..17.01 rows=4 width=57)\n> This query returns 500 rows of data\n>\n>\n> Starting query 1\n> Query 1: SELECT * FROM xx WHERE time BETWEEN '2002-11-11 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 15 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..705.57 rows=175 \n> width=57)\n> This query return 3582 rows\n>\n> Starting query 2\n> Query 2: SELECT * FROM xx WHERE time BETWEEN '2002-10-19 15:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335;\n> Time taken = 65 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..3327.55 rows=832 \n> width=57)\n> This query returns 15692 rows\n>\n> Starting query 3\n> Query 3: SELECT * FROM xx_result WHERE time BETWEEN '2002-08-20 \n> 15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\n>\n> Time taken = 241 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..10111.36 rows=2547 \n> width=57)\n> This query returns 48768 rows\n>\n>\n> Cheers\n>\n> Nikk\n>\n>\n>\n>\n> -----Original Message-----\n> From: Stephan Szabo [mailto:[email protected]]\n> Sent: 18 November 2002 13:02\n> To: Nikk Anderson\n> Cc: [email protected]\n> Subject: Re: [PERFORM] selects from large tables\n>\n>\n>\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n>\n> > Any ideas on how we can select data more quickly from large tables?\n>\n> Are these row estimates realistic? It's estimating nearly 20 million rows\n> to be returned by some of the queries (unless I'm misreading the\n> number - possible since it's 5am here). 
At that point you almost\n> certainly want to be using a cursor rather than plain queries since even a\n> small width result (say 50 bytes) gives a very large (1 gig) result set.\n>\n> > - Queries and explain plans\n> >\n> > select count(*) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select count(job_id) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=4)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=4)\n> >\n> > hawkdb=# explain select * from table_name;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select count(*) from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=537874.18..537874.18 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 1;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\n> > width=57)\n> >\n> > hawkdb=#explain select * from table_name where time > '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n> >\n> > hawkdb=# explain select * from http_result where time < '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Index Scan using table_name_time on table_name (cost=0.00..75879.17\n> > rows=19669 width=57)\n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Mon, 18 Nov 2002 10:46:11 -0500", "msg_from": "\"Charles H. Woloszynski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "\"Charles H. Woloszynski\" <[email protected]> writes:\n> Are you doing vaccums on these tables? I was under the understanding \n> that the estimated row count should be close to the real row count \n> returned, and when it is not (as it looks in your case), the primary \n> reason for the disconnect is that the stats for the tables are \n> out-of-date. \n\nThe fact that he's using 7.1 doesn't help any; the statistics mechanisms\nin 7.1 are pretty weak compared to 7.2.\n\n> Also, do you do any clustering of the data (since the queries are mostly \n> time limited)? I am wondering if the system is doing lots of seeks to \n> get the data (implying that the data is all over the disk and not \n> clustered).\n\nIt would also be interesting to try a two-column index ordered the other\nway (timestamp as the major sort key instead of ID). 
Can't tell if that\nwill be a win without more info about the data properties, but it's\nworth looking at.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Nov 2002 11:25:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables " }, { "msg_contents": "On Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n> Hi,\n> Thanks for the reply Stephen, the data is 'somewhat' realistic.....\n\nTom's said most of what I would have, except that if you've got a wide\nvariation based on job_id you may want to change the statistics gathering\ndefaults for that column with ALTER TABLE ALTER COLUMN SET STATISTICS when\nyou get to 7.2.\n\n\n", "msg_date": "Mon, 18 Nov 2002 08:32:46 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" } ]
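Put together, the suggestions in this thread amount to roughly the following. It is a sketch only: the index name is invented, the table and columns are the http_result/job_id/time names that appear in the posted plans, job_id leads the index because the queries pin it to one value and range-scan on time, and ALTER TABLE ... SET STATISTICS needs 7.2 or later.

    CREATE INDEX http_result_jobid_time ON http_result (job_id, time);

    -- 7.2 and later: take a larger statistics sample for the skewed column
    ALTER TABLE http_result ALTER COLUMN job_id SET STATISTICS 100;

    -- refresh the planner's statistics afterwards
    VACUUM ANALYZE http_result;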
[ { "msg_contents": "Hi, \n\nUnfortunately explain analyze does not work on our postgres version (7.1) ?\n\nI think I will download and compile 7.2, and try to compile in 64bit mode to\nsee if that helps improve performance.\n\nCheers\n\nNikk \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 18 November 2002 15:04\nTo: Stephan Szabo\nCc: Nikk Anderson; [email protected]\nSubject: Re: [PERFORM] selects from large tables \n\n\nStephan Szabo <[email protected]> writes:\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n>> Any ideas on how we can select data more quickly from large tables?\n\n> Are these row estimates realistic?\n\nShowing EXPLAIN ANALYZE results would be much more useful than just\nEXPLAIN.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] selects from large tables \n\n\nHi, \n\nUnfortunately explain analyze does not work on our postgres version (7.1) ?\n\nI think I will download and compile 7.2, and try to compile in 64bit mode to see if that helps improve performance.\n\nCheers\n\nNikk \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 18 November 2002 15:04\nTo: Stephan Szabo\nCc: Nikk Anderson; [email protected]\nSubject: Re: [PERFORM] selects from large tables \n\n\nStephan Szabo <[email protected]> writes:\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n\n>> Any ideas on how we can select data more quickly from large tables?\n\n> Are these row estimates realistic?\n\nShowing EXPLAIN ANALYZE results would be much more useful than just\nEXPLAIN.\n\n                        regards, tom lane", "msg_date": "Mon, 18 Nov 2002 15:36:08 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables " }, { "msg_contents": "On Mon, Nov 18, 2002 at 03:36:08PM -0000, Nikk Anderson wrote:\n> Hi, \n> \n> Unfortunately explain analyze does not work on our postgres version (7.1) ?\n\nNo, it doesn't.\n\n> I think I will download and compile 7.2, and try to compile in 64bit mode to\n> see if that helps improve performance.\n\nI have seen something like a 40% improvement in performance from 7.1\nto 7.2 on Solaris 7 in my tests. There are some problems with the 64\nbit compilation, by the way, so make sure that you check out the\nFAQ and test carefully. You need to make some modifications of the\nsource files in order to avoid some buggly libraries on Solaris.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 18 Nov 2002 10:43:02 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" } ]
[ { "msg_contents": "Hi Charlie, \nWe do a vacuum analyze every night at midnight. I thought that perhaps the\nanalyzing was not being done correctly, so I manually did a vacuum analyze\nand the estimated row counts were way still out. \nI will look into clustering the data and see what effect that may have.\n\nThanks\n\nNikk\n\n\n-----Original Message-----\nFrom: Charles H. Woloszynski [mailto:[email protected]]\nSent: 18 November 2002 15:46\nTo: Nikk Anderson\nCc: 'Stephan Szabo'; [email protected]\nSubject: Re: [PERFORM] selects from large tables\n\n\nNikk:\n\nAre you doing vaccums on these tables? I was under the understanding \nthat the estimated row count should be close to the real row count \nreturned, and when it is not (as it looks in your case), the primary \nreason for the disconnect is that the stats for the tables are \nout-of-date. \n\nSince it used the indexes, I am not sure if the old stats are causing \nany issues, but I suspect they are not helping. \n\nAlso, do you do any clustering of the data (since the queries are mostly \ntime limited)? I am wondering if the system is doing lots of seeks to \nget the data (implying that the data is all over the disk and not \nclustered). \n\nCharlie\n\nNikk Anderson wrote:\n\n> Hi,\n> Thanks for the reply Stephen, the data is 'somewhat' realistic.....\n>\n> The data in the table is actually synthetic, but the structure is the \n> same as our live system, and the queries are similar to those we \n> actually carry out. \n>\n> As the data was synthetic there was a bit of repetition (19 million \n> rows of repetition!! ) of the item used in the where clause, meaning \n> that most of the table was returned by the queries - oops! So, I have \n> done is some more realistic queries from our live system, and put the \n> time it takes, and the explain results. Just to note that the \n> explain's estimated number of rows is way out - its guesses are way \n> too low.\n>\n> Typically a normal query on our live system returns between 200 and \n> 30000 rows depending on the reports a user wants to generate. In \n> prior testing, we noted that using SELECT COUNT( .. 
was slower than \n> other queries, which is why we though we would test counts first.\n>\n>\n> Here are some more realistic results, which still take a fair whack of \n> time........\n>\n>\n> Starting query 0\n> Query 0: SELECT * FROM xx WHERE time BETWEEN '2002-11-17 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 697 ms\n> Index Scan using http_timejobid on xx (cost=0.00..17.01 rows=4 width=57)\n> This query returns 500 rows of data\n>\n>\n> Starting query 1\n> Query 1: SELECT * FROM xx WHERE time BETWEEN '2002-11-11 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 15 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..705.57 rows=175 \n> width=57)\n> This query return 3582 rows\n>\n> Starting query 2\n> Query 2: SELECT * FROM xx WHERE time BETWEEN '2002-10-19 15:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335;\n> Time taken = 65 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..3327.55 rows=832 \n> width=57)\n> This query returns 15692 rows\n>\n> Starting query 3\n> Query 3: SELECT * FROM xx_result WHERE time BETWEEN '2002-08-20 \n> 15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\n>\n> Time taken = 241 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..10111.36 rows=2547 \n> width=57)\n> This query returns 48768 rows\n>\n>\n> Cheers\n>\n> Nikk\n>\n>\n>\n>\n> -----Original Message-----\n> From: Stephan Szabo [mailto:[email protected]]\n> Sent: 18 November 2002 13:02\n> To: Nikk Anderson\n> Cc: [email protected]\n> Subject: Re: [PERFORM] selects from large tables\n>\n>\n>\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n>\n> > Any ideas on how we can select data more quickly from large tables?\n>\n> Are these row estimates realistic? It's estimating nearly 20 million rows\n> to be returned by some of the queries (unless I'm misreading the\n> number - possible since it's 5am here). 
At that point you almost\n> certainly want to be using a cursor rather than plain queries since even a\n> small width result (say 50 bytes) gives a very large (1 gig) result set.\n>\n> > - Queries and explain plans\n> >\n> > select count(*) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select count(job_id) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=4)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=4)\n> >\n> > hawkdb=# explain select * from table_name;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select count(*) from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=537874.18..537874.18 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 1;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\n> > width=57)\n> >\n> > hawkdb=#explain select * from table_name where time > '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n> >\n> > hawkdb=# explain select * from http_result where time < '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Index Scan using table_name_time on table_name (cost=0.00..75879.17\n> > rows=19669 width=57)\n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Mon, 18 Nov 2002 16:32:49 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables" } ]
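For reference, the clustering Nikk mentions is a single command in 7.x, though the old implementation takes an exclusive lock and rewrites the table, so it needs a maintenance window and a statistics refresh afterwards; check the release's own documentation for what CLUSTER preserves before relying on it. A sketch using the index and table names quoted in the thread:

    -- pre-8.0 syntax: CLUSTER indexname ON tablename
    CLUSTER http_timejobid ON http_result;
    VACUUM ANALYZE http_result;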
[ { "msg_contents": "Hi Tom, \n\nYes, we should upgrade to 7.2 soon, its just that as it is a live system\nrunning 24x7 we are careful about upgrading core components so we do not\ndisrupt our data collection agents too much.\n\nHere is some table info, we currently index by time then ID. Generally,\ndata will be selected by ID, then time range. Clustering may help on this.\n\n\n\n Attribute | Type | Modifier\n-----------------+--------------------------+----------\n job_id | integer | not null\n server_id | integer | not null\n time | timestamp with time zone | not null\n availability | boolean |\n connection_time | integer |\n dns_setup | integer |\n server_response | integer |\n frontpage_size | integer |\n frontpage_time | integer |\n transfer_size | integer |\n transfer_time | integer |\n error_id | integer |\n redirect_time | integer |\n polling_id | integer | not null\nIndices: http_result_pk,\n http_timejobid\n\nThanks\n\nNikk\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 18 November 2002 16:25\nTo: Charles H. Woloszynski\nCc: Nikk Anderson; 'Stephan Szabo'; [email protected]\nSubject: Re: [PERFORM] selects from large tables \n\n\n\"Charles H. Woloszynski\" <[email protected]> writes:\n> Are you doing vaccums on these tables? I was under the understanding \n> that the estimated row count should be close to the real row count \n> returned, and when it is not (as it looks in your case), the primary \n> reason for the disconnect is that the stats for the tables are \n> out-of-date. \n\nThe fact that he's using 7.1 doesn't help any; the statistics mechanisms\nin 7.1 are pretty weak compared to 7.2.\n\n> Also, do you do any clustering of the data (since the queries are mostly \n> time limited)? I am wondering if the system is doing lots of seeks to \n> get the data (implying that the data is all over the disk and not \n> clustered).\n\nIt would also be interesting to try a two-column index ordered the other\nway (timestamp as the major sort key instead of ID). Can't tell if that\nwill be a win without more info about the data properties, but it's\nworth looking at.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] selects from large tables \n\n\nHi Tom, \n\nYes, we should upgrade to 7.2 soon, its just that as it is a live system running 24x7 we are careful about upgrading core components so we do not disrupt our data collection agents too much.\nHere is some table info, we currently index by time then ID.  Generally, data will be selected by ID, then time range.  Clustering may help on this.  
\n\n    Attribute    |           Type           | Modifier\n-----------------+--------------------------+----------\n job_id          | integer                  | not null\n server_id       | integer                  | not null\n time            | timestamp with time zone | not null\n availability    | boolean                  |\n connection_time | integer                  |\n dns_setup       | integer                  |\n server_response | integer                  |\n frontpage_size  | integer                  |\n frontpage_time  | integer                  |\n transfer_size   | integer                  |\n transfer_time   | integer                  |\n error_id        | integer                  |\n redirect_time   | integer                  |\n polling_id      | integer                  | not null\nIndices: http_result_pk,\n         http_timejobid\n\nThanks\n\nNikk\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: 18 November 2002 16:25\nTo: Charles H. Woloszynski\nCc: Nikk Anderson; 'Stephan Szabo'; [email protected]\nSubject: Re: [PERFORM] selects from large tables \n\n\n\"Charles H. Woloszynski\" <[email protected]> writes:\n> Are you doing vaccums on these tables?  I was under the understanding \n> that the estimated row count should be close to the real row count \n> returned, and when it is not (as it looks in your case), the primary \n> reason for the disconnect is that the stats for the tables are \n> out-of-date.  \n\nThe fact that he's using 7.1 doesn't help any; the statistics mechanisms\nin 7.1 are pretty weak compared to 7.2.\n\n> Also, do you do any clustering of the data (since the queries are mostly \n> time limited)?  I am wondering if the system is doing lots of seeks to \n> get the data (implying that the data is all over the disk and not \n> clustered).\n\nIt would also be interesting to try a two-column index ordered the other\nway (timestamp as the major sort key instead of ID).  Can't tell if that\nwill be a win without more info about the data properties, but it's\nworth looking at.\n\n                        regards, tom lane", "msg_date": "Mon, 18 Nov 2002 16:36:44 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables " }, { "msg_contents": "You might want to check out 7.3 while your at it. It's currently\nplanned to be released around Dec 1st, which might fit in nicely with\nyour upgrade schedule.\n\nRobert Treat\n\nOn Mon, 2002-11-18 at 11:36, Nikk Anderson wrote:\n> Hi Tom, \n> \n> Yes, we should upgrade to 7.2 soon, its just that as it is a live system\n> running 24x7 we are careful about upgrading core components so we do not\n> disrupt our data collection agents too much.\n> \n> Here is some table info, we currently index by time then ID. Generally,\n> data will be selected by ID, then time range. Clustering may help on\n> this. 
\n> \n> \n> Attribute | Type | Modifier \n> -----------------+--------------------------+---------- \n> job_id | integer | not null \n> server_id | integer | not null \n> time | timestamp with time zone | not null \n> availability | boolean | \n> connection_time | integer | \n> dns_setup | integer | \n> server_response | integer | \n> frontpage_size | integer | \n> frontpage_time | integer | \n> transfer_size | integer | \n> transfer_time | integer | \n> error_id | integer | \n> redirect_time | integer | \n> polling_id | integer | not null \n> Indices: http_result_pk, \n> http_timejobid \n> \n> Thanks \n> \n> Nikk \n> \n> \n> -----Original Message----- \n> From: Tom Lane [ mailto:[email protected] <mailto:[email protected]> ] \n> Sent: 18 November 2002 16:25 \n> To: Charles H. Woloszynski \n> Cc: Nikk Anderson; 'Stephan Szabo'; [email protected] \n> Subject: Re: [PERFORM] selects from large tables \n> \n> \n> \"Charles H. Woloszynski\" <[email protected]> writes: \n> > Are you doing vaccums on these tables? I was under the understanding \n> > that the estimated row count should be close to the real row count \n> > returned, and when it is not (as it looks in your case), the primary \n> > reason for the disconnect is that the stats for the tables are \n> > out-of-date. \n> \n> The fact that he's using 7.1 doesn't help any; the statistics mechanisms\n> \n> in 7.1 are pretty weak compared to 7.2. \n> \n> > Also, do you do any clustering of the data (since the queries are\n> mostly \n> > time limited)? I am wondering if the system is doing lots of seeks to\n> \n> > get the data (implying that the data is all over the disk and not \n> > clustered). \n> \n> It would also be interesting to try a two-column index ordered the other\n> \n> way (timestamp as the major sort key instead of ID). Can't tell if that\n> \n> will be a win without more info about the data properties, but it's \n> worth looking at. \n> \n> regards, tom lane \n> \n\n\n\n", "msg_date": "18 Nov 2002 13:48:32 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" } ]
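Nikk's earlier idea of splitting the data across many tables can be prototyped with inheritance, which 7.x already supports; the gain is mostly on the maintenance side (an old month can be archived or dropped as a unit) rather than on SELECT speed, since planners of that era still visit every child for a query against the parent, each child needs its own indexes, and the collection agents would have to insert into the current month's child table. A hypothetical sketch, with made-up child names and the http_result parent inferred from the schema above:

    CREATE TABLE http_result_2002_11 () INHERITS (http_result);
    CREATE TABLE http_result_2002_12 () INHERITS (http_result);

    -- from 7.1 onwards a query on the parent sees all child rows by default
    SELECT * FROM http_result
    WHERE job_id = 335 AND time BETWEEN '2002-11-01' AND '2002-12-01';

    -- retiring a month is then a single cheap operation
    DROP TABLE http_result_2002_11;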
[ { "msg_contents": "To whom it may concern,\n\nI am a java developer using postgres as a DB server. Me and my development team have a product comprised of about 50 tables, with about 10,000 records in the largest table. We have looked for concrete answers in books and the web for solutions to several problems that are plaguing us. Now we look to the source.\n\nIssue #1 Massive deletion of records.\n\nQ - If many (eg hundreds) records are deleted (purposely), those records get flagged for later removal. What is the best sequence of operations to optimize the database afterwards? Is it Vacuum, Re-index, then do a Vacuum Analyze.\n\nSome of what I have read suggests that doing a vacuum without a re-index, can cause a given index to be invalid (ie entries pointing to records that do not match the index criteria).\n\nThis would then suggest that doing a Vacuum Analyze would create an incorrect statistics table.\n\nAny help regarding the best maintenance policy, ramifications of mass deletions, vacuuming, and re-indexing would be most appreciated. Thanks\n\n\n\n---------------------------------\nDo you Yahoo!?\nYahoo! Web Hosting - Let the expert host your site\nTo whom it may concern,\nI am a java developer using postgres as a DB server.  Me and my development team have a product comprised of about 50 tables, with about 10,000 records in the largest table.  We have looked for concrete answers in books and the web for solutions to several problems that are plaguing us.  Now we look to the source.\nIssue #1 Massive deletion of records.\nQ - If many (eg hundreds) records are deleted (purposely), those records get flagged for later removal.  What is the best sequence of operations to optimize the database afterwards?  Is it Vacuum, Re-index, then do a Vacuum Analyze.\nSome of what I have read suggests that doing a vacuum without a re-index, can cause a given index to be invalid (ie entries pointing to records that do not match the index criteria).\nThis would then suggest that doing a Vacuum Analyze would create an incorrect statistics table.\nAny help regarding the best maintenance policy, ramifications of mass deletions, vacuuming, and re-indexing would be most appreciated.  ThanksDo you Yahoo!?\nYahoo! Web Hosting - Let the expert host your site", "msg_date": "Mon, 18 Nov 2002 19:02:40 -0800 (PST)", "msg_from": "Adrian Calvin <[email protected]>", "msg_from_op": true, "msg_subject": "Question regarding effects of Vacuum, Vacuum Analyze, and Reindex" }, { "msg_contents": "Moving thread to pgsql-performance.\n\nOn Mon, 2002-11-18 at 22:02, Adrian Calvin wrote:\n> Q - If many (eg hundreds) records are deleted (purposely), those\n> records get flagged for later removal. What is the best sequence of\n> operations to optimize the database afterwards? Is it Vacuum,\n> Re-index, then do a Vacuum Analyze.\n\nJust run a regular vacuum once for the above. If you modify 10%+ of the\ntable (via single or multiple updates, deletes or inserts) then a vacuum\nanalyze will be useful.\n\nRe-index when you change the tables contents a few times over. 
(Have\ndeleted or updated 30k entries in a table with 10k entries at any given\ntime).\n\n\nGeneral maintenance for a dataset of that size will probably simply be a\nnightly vacuum, weekly vacuum analyze, and annual reindex or dump /\nrestore (upgrades).\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "19 Nov 2002 08:46:39 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Question regarding effects of Vacuum, Vacuum" } ]
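Rod's schedule translates into the following commands; the table name here is only a placeholder, and the two VACUUM forms can also be run database-wide by omitting the table name.

    -- nightly: reclaim the space left by deletes and updates
    VACUUM customer_orders;

    -- weekly, or whenever 10% or more of a table has changed: refresh statistics too
    VACUUM ANALYZE customer_orders;

    -- occasionally, after the contents have turned over several times: rebuild the indexes
    REINDEX TABLE customer_orders;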
[ { "msg_contents": "Hello,\n\n does have IN operator in WHERE clausule any \"undocumented\" slowdown?\nI have tables:\n\nCREATE TABLE A (\n pkA int NOT NULL\n num int\n\t...\n);\n\nCREATE TABLE B (\n fkA int NOT NULL,\n\t...\n);\n\nALTER TABLE B ADD CONSTRAINT FK_B_fkB_A_pkA FOREIGN KEY (fkB) REFERENCES \nA(pkA);\n\n I have 3000 records in A and same number in B. For each record A I \nhave one record in B.\n When I do:\n\n\nDELETE FROM B WHERE fkB IN (SELECT pkA FROM A WHERE num=1)\n\n postgresql start working for 2-3 minutes. But select (from IN \nclausule) SELECT pkA FROM A WHERE num=1 end in few seconds. The slowest \nis this DELETE when IN SELECT returns no records.\n Does any have some idea whats wrong?\n\n\t\t\t\t\t\t\tJirka Novak\n\n", "msg_date": "Tue, 19 Nov 2002 15:26:09 +0100", "msg_from": "Jirka Novak <[email protected]>", "msg_from_op": true, "msg_subject": "Slow DELETE with IN clausule" }, { "msg_contents": "On Tuesday 19 Nov 2002 2:26 pm, Jirka Novak wrote:\n> Hello,\n>\n> does have IN operator in WHERE clausule any \"undocumented\" slowdown?\n\nYes, but I thought it was documented somewhere. It's certainly been discussed \non the lists. Search the archives for IN/EXISTS. If you can rewrite your \nquery with an EXISTS clause you should see a big improvement.\n\nhttp://archives.postgresql.org\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Nov 2002 14:36:19 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow DELETE with IN clausule" }, { "msg_contents": "On Tue, Nov 19, 2002 at 03:26:09PM +0100, Jirka Novak wrote:\n> Hello,\n> \n> does have IN operator in WHERE clausule any \"undocumented\" slowdown?\n> I have tables:\n\nIt's not undocumented:\n\nhttp://www.ca.postgresql.org/docs/faq-english.html#4.22\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 19 Nov 2002 09:39:39 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow DELETE with IN clausule" } ]
[ { "msg_contents": "Hi, \n\nI tried a test cluster on a copy of our real data - all 10 million rows or\nso. WOW! The normal select performance improved drastically. \nSelecting 3 months worth of data was taking 146 seconds to retrieve. After\nclustering it took 7.7 seconds! We are now looking into ways we can\nautomate clustering to keep the table up to date. The cluster itself took\naround 2.5 hours.\n\nAs our backend systems are writing hundreds of rows of data in per minute\ninto the table that needs clustering - will cluster handle locking the\ntables when dropping the old, and renaming the clustered data? What happens\nto the data being added to the table while cluster is running? Our backend\nsystems may have some problems if the table does not exist when it tries to\ninsert, and we don't want to lose any data.\n\nThanks\n\nNikk\n\n\n-----Original Message-----\nFrom: Charles H. Woloszynski [mailto:[email protected]]\nSent: 18 November 2002 15:46\nTo: Nikk Anderson\nCc: 'Stephan Szabo'; [email protected]\nSubject: Re: [PERFORM] selects from large tables\n\n\nNikk:\n\nAre you doing vaccums on these tables? I was under the understanding \nthat the estimated row count should be close to the real row count \nreturned, and when it is not (as it looks in your case), the primary \nreason for the disconnect is that the stats for the tables are \nout-of-date. \n\nSince it used the indexes, I am not sure if the old stats are causing \nany issues, but I suspect they are not helping. \n\nAlso, do you do any clustering of the data (since the queries are mostly \ntime limited)? I am wondering if the system is doing lots of seeks to \nget the data (implying that the data is all over the disk and not \nclustered). \n\nCharlie\n\nNikk Anderson wrote:\n\n> Hi,\n> Thanks for the reply Stephen, the data is 'somewhat' realistic.....\n>\n> The data in the table is actually synthetic, but the structure is the \n> same as our live system, and the queries are similar to those we \n> actually carry out. \n>\n> As the data was synthetic there was a bit of repetition (19 million \n> rows of repetition!! ) of the item used in the where clause, meaning \n> that most of the table was returned by the queries - oops! So, I have \n> done is some more realistic queries from our live system, and put the \n> time it takes, and the explain results. Just to note that the \n> explain's estimated number of rows is way out - its guesses are way \n> too low.\n>\n> Typically a normal query on our live system returns between 200 and \n> 30000 rows depending on the reports a user wants to generate. In \n> prior testing, we noted that using SELECT COUNT( .. 
was slower than \n> other queries, which is why we though we would test counts first.\n>\n>\n> Here are some more realistic results, which still take a fair whack of \n> time........\n>\n>\n> Starting query 0\n> Query 0: SELECT * FROM xx WHERE time BETWEEN '2002-11-17 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 697 ms\n> Index Scan using http_timejobid on xx (cost=0.00..17.01 rows=4 width=57)\n> This query returns 500 rows of data\n>\n>\n> Starting query 1\n> Query 1: SELECT * FROM xx WHERE time BETWEEN '2002-11-11 14:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335\n> Time taken = 15 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..705.57 rows=175 \n> width=57)\n> This query return 3582 rows\n>\n> Starting query 2\n> Query 2: SELECT * FROM xx WHERE time BETWEEN '2002-10-19 15:08:58.021' \n> AND '2002-11-18 14:08:58.021' AND job_id = 335;\n> Time taken = 65 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..3327.55 rows=832 \n> width=57)\n> This query returns 15692 rows\n>\n> Starting query 3\n> Query 3: SELECT * FROM xx_result WHERE time BETWEEN '2002-08-20 \n> 15:08:58.021' AND '2002-11-18 14:08:58.021' AND job_id = 335;\n>\n> Time taken = 241 seconds\n> Index Scan using http_timejobid on xx (cost=0.00..10111.36 rows=2547 \n> width=57)\n> This query returns 48768 rows\n>\n>\n> Cheers\n>\n> Nikk\n>\n>\n>\n>\n> -----Original Message-----\n> From: Stephan Szabo [mailto:[email protected]]\n> Sent: 18 November 2002 13:02\n> To: Nikk Anderson\n> Cc: [email protected]\n> Subject: Re: [PERFORM] selects from large tables\n>\n>\n>\n> On Mon, 18 Nov 2002, Nikk Anderson wrote:\n>\n> > Any ideas on how we can select data more quickly from large tables?\n>\n> Are these row estimates realistic? It's estimating nearly 20 million rows\n> to be returned by some of the queries (unless I'm misreading the\n> number - possible since it's 5am here). 
At that point you almost\n> certainly want to be using a cursor rather than plain queries since even a\n> small width result (say 50 bytes) gives a very large (1 gig) result set.\n>\n> > - Queries and explain plans\n> >\n> > select count(*) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select count(job_id) from table_name;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=488700.65..488700.65 rows=1 width=4)\n> > -> Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 \n> width=4)\n> >\n> > hawkdb=# explain select * from table_name;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..439527.12 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select count(*) from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Aggregate (cost=537874.18..537874.18 rows=1 width=0)\n> > -> Seq Scan on table_name (cost=0.00..488700.65 rows=19669412 \n> width=0)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 13;\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on http_result (cost=0.00..488700.65 rows=19669412 width=57)\n> >\n> > hawkdb=# explain select * from table_name where job_id = 1;\n> > NOTICE: QUERY PLAN:\n> > Index Scan using http_result_pk on table_name (cost=0.00..5.01 rows=1\n> > width=57)\n> >\n> > hawkdb=#explain select * from table_name where time > '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Seq Scan on table_name (cost=0.00..488700.65 rows=19649743 width=57)\n> >\n> > hawkdb=# explain select * from http_result where time < '2002-10-10';\n> > NOTICE: QUERY PLAN:\n> > Index Scan using table_name_time on table_name (cost=0.00..75879.17\n> > rows=19669 width=57)\n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]", "msg_date": "Wed, 20 Nov 2002 15:08:11 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "Nikk Anderson <[email protected]> writes:\n> As our backend systems are writing hundreds of rows of data in per minute\n> into the table that needs clustering - will cluster handle locking the\n> tables when dropping the old, and renaming the clustered data? What happens\n> to the data being added to the table while cluster is running?\n\nNothing, because there won't be any: cluster acquires exclusive lock on\nthe table while it runs. Any would-be inserter will block till it's done.\n\nIf you are clustering by timestamp of insertion, and you never update or\ndelete rows, then I think it's a one-time-and-you're-done kind of task\nanyway --- newly inserted rows will always get added at the end, and so\nwill be in timestamp order anyway. But if you need to update the table\nthen things aren't so nice :-(\n\n\t\t\tregards, tom lane\n\nPS: it's not really necessary to quote the entire thread in every\nmessage, and it's definitely not nice to do so twice in both plain\ntext and HTML :-(. 
Please have some consideration for the size of\nyour emails that Marc is archiving for posterity ...\n", "msg_date": "Wed, 20 Nov 2002 10:16:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables " }, { "msg_contents": "Nikk Anderson wrote:\n> Hi, \n> \n> I tried a test cluster on a copy of our real data - all 10 million rows or\n> so. WOW! The normal select performance improved drastically. \n> Selecting 3 months worth of data was taking 146 seconds to retrieve. After\n> clustering it took 7.7 seconds! We are now looking into ways we can\n> automate clustering to keep the table up to date. The cluster itself took\n> around 2.5 hours.\n> \n> As our backend systems are writing hundreds of rows of data in per minute\n> into the table that needs clustering - will cluster handle locking the\n> tables when dropping the old, and renaming the clustered data? What happens\n> to the data being added to the table while cluster is running? Our backend\n> systems may have some problems if the table does not exist when it tries to\n> insert, and we don't want to lose any data.\n\nCLUSTER will exclusively lock the table from read/write during the\nCLUSTER. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 20 Nov 2002 10:18:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "On Wed, 2002-11-20 at 10:08, Nikk Anderson wrote:\n> Hi, \n> \n> I tried a test cluster on a copy of our real data - all 10 million\n> rows or so. WOW! The normal select performance improved\n> drastically. \n> \n> Selecting 3 months worth of data was taking 146 seconds to retrieve. \n> After clustering it took 7.7 seconds! We are now looking into ways we\n> can automate clustering to keep the table up to date. The cluster\n> itself took around 2.5 hours.\n> \n> As our backend systems are writing hundreds of rows of data in per\n> minute into the table that needs clustering - will cluster handle\n> locking the tables when dropping the old, and renaming the clustered\n> data? What happens to the data being added to the table while cluster\n> is running? Our backend systems may have some problems if the table\n> does not exist when it tries to insert, and we don't want to lose any\n> data.\n\nThe table will be locked while cluster is running. Meaning, any new\ndata will have to sit and wait.\n\nCluster won't buy much on a mostly clustered table. But it's probably\nworth it for you to do it when 20% of the tuples turnover (deleted,\nupdated, inserts, etc).\n\n\nI'm a little curious to know when the last time you had run a VACUUM\nFULL on that table was.\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "20 Nov 2002 10:31:05 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "Nikk Anderson kirjutas K, 20.11.2002 kell 20:08:\n> Hi, \n> \n> I tried a test cluster on a copy of our real data - all 10 million\n> rows or so. WOW! The normal select performance improved\n> drastically. \n> \n> Selecting 3 months worth of data was taking 146 seconds to retrieve. \n> After clustering it took 7.7 seconds! We are now looking into ways we\n> can automate clustering to keep the table up to date. 
The cluster\n> itself took around 2.5 hours.\n> \n> As our backend systems are writing hundreds of rows of data in per\n> minute into the table that needs clustering - will cluster handle\n> locking the tables when dropping the old, and renaming the clustered\n> data? What happens to the data being added to the table while cluster\n> is running? Our backend systems may have some problems if the table\n> does not exist when it tries to insert, and we don't want to lose any\n> data.\n\nYou could use a staging table that takes all the inserts and the\ncontents of which are moved (begin;insert into big select from\nsmall;delete from small;commit;vacuum full small;) to the main table\nonce a day (or week or month) just before clustering the big one.\n\nThen do all your selects from a UNION view on both - thus you have a big\nfast clustered table and non-clustered \"live\" table which stays small.\nThat should make your selects fast(er).\n\n-----------------\nHannu\n\n\n", "msg_date": "21 Nov 2002 01:26:22 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" } ]
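Two hedged sketches of what the thread above describes. First, the one-off physical reordering, written with the CLUSTER syntax of the 7.x era and the index/table names quoted earlier in the thread:

    CLUSTER http_timejobid ON xx;   -- takes an exclusive lock for the duration

Second, Hannu's staging-table idea, with assumed table and view names; UNION ALL is used rather than UNION since the two tables never hold the same row and deduplication would only add cost:

    CREATE VIEW xx_all AS
        SELECT * FROM xx_big       -- large, clustered, mostly static
        UNION ALL
        SELECT * FROM xx_live;     -- small, takes the ongoing inserts

    -- periodically, just before re-clustering xx_big:
    BEGIN;
    INSERT INTO xx_big SELECT * FROM xx_live;
    DELETE FROM xx_live;
    COMMIT;
    VACUUM xx_live;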
[ { "msg_contents": "Hi, \n> I'm a little curious to know when the last time you had run a VACUUM\n> FULL on that table was.\n>\n>-- \n>Rod Taylor <[email protected]>\n\nWe do a VACUUM ANALYZE every night, there is no option for FULL on our\nversion (7.1)\n\nNikk\n\n\n\n\n\nRE: [PERFORM] selects from large tables\n\n\nHi, \n> I'm a little curious to know when the last time you had run a VACUUM\n> FULL on that table was.\n>\n>-- \n>Rod Taylor <[email protected]>\n\nWe do a VACUUM ANALYZE every night, there is no option for FULL on our version (7.1)\n\nNikk", "msg_date": "Wed, 20 Nov 2002 15:49:26 -0000", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: selects from large tables" }, { "msg_contents": "> We do a VACUUM ANALYZE every night, there is no option for FULL on our\n> version (7.1)\n\nOh, I see. Old stuff :)\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "20 Nov 2002 10:56:19 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: selects from large tables" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Wed, 20 Nov 2002 13:58:49 -0500 (EST)", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "subscribe [email protected]\n\n", "msg_date": "Thu, 21 Nov 2002 15:47:51 -0500 (EST)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "None" } ]
[ { "msg_contents": "Direct Cc: would be MUCH appreciated!\n\nI'm using PostgreSQL 7.1.3\n\nMy FIRST question is:\n\nHow come I can't seem to get any of that nifty profiling output to\n/var/log/messages?\n\n[aside]\nNot in /var/log/pgsql nor in /var/lib/pgsql/data/pg.log either. Don't\ncare where it goes, so long as I can find it... While I realize that this\nis very configurable, some \"clues\" to newbies about the usual places would\nhave been most welcome in the docs.\n[/aside]\n\n\nI have:\n Altered postgresql.conf to turn \"on\" the show_query_stats (et al) as\nwell as syslog = 2\n Altered /etc/rc.d/init/postgresql to be:\nsu -l postgres -s /bin/sh -c \"/usr/bin/pg_ctl -D $PGDATA -o '-i -s ' -p\n/usr/bin/postmaster start > /dev/null 2>&1\" < /dev/null\n Altered /var/lib/pgsql/postmaster.opts to be:\n/usr/bin/postmaster '-D' '/var/lib/pgsql/data' '-i' '-s'\n\nOnce I'm in psql, I use SET to turn them on as well.\n\nThis resulted in all my ERROR and NOTICE messages going into\n/var/log/messages, but *NOT* any sort of nifty query analysis type stuff.\n\nSo what did I miss? Is there another client/server spot where I need to\nget that '-s' in there?\n\nIs there another switch to actually kick-start it? The docs are probably\nreal clear to y'all, but I'm obviously missing something simple here...\n\n\nOf course, the root problem is a monster query that suddenly takes far far\ntoo long...\n\nI realize that I'm trying to do a full-text search, *BUT* a similar query\n\"works fine\"...\n\nWhy does this take minutes:\n\nSELECT DISTINCT *, 0 + (0 + 10 * (lower(title) like '%einstein%') ::int +\n10 * (lower(author_flattened) like '%einstein%') ::int + 30 *\n(lower(subject_flattened) like '%einstein%') ::int + 30 * (lower(text)\nLIKE '%einstein%') ::int + 9 * (substring(lower(title), 1, 20) like\n'%einstein%') ::int + 25 * (substring(lower(text), 1, 20) LIKE\n'%einstein%') ::int ) AS points FROM article WHERE TRUE AND (FALSE OR\n(lower(title) like '%einstein%') OR (lower(author_flattened) like\n'%einstein%') OR (lower(subject_flattened) like '%einstein%') OR\n(lower(text) LIKE '%einstein%') ) ORDER BY points desc, volume, number,\narticle.article LIMIT 10, 0\n\nwhile this takes seconds:\n\nSELECT *, 0 + 3 * ( title like '%Einstein%' )::int + 3 * ( author like\n'%Einstein%' )::int + ( ( 1 + 1 * ( lower(text) like '%einstein%' )::int )\n+ ( 0 + ( subject like '%Einstein%' )::int ) ) AS points FROM article\nWHERE TRUE AND title like '%Einstein%' AND author like '%Einstein%' AND (\n( TRUE AND lower(text) like '%einstein%' ) OR ( FALSE OR subject like\n'%Einstein%' ) ) ORDER BY points desc, volume, number, article.article\nLIMIT 10, 0\n\n\nIs it the function calls to lower() which I have yet to implement on the\nsecond query?\n\nIs it the sheer number of rows being returned?\n\nDo a lot of \"OR\" sub-parts to the WHERE drag it down?\n\nArticle has ~17000 records in it.\nThe 'text' field is the actual contents of a magazine article.\n\nI would ask if it was the ~* (REGEXP) but that hasn't even kicked in for\nthis single-term ('Einstein') input! 
:-^\n\nWe're talking about minutes instead of seconds here.\n\nAll fields are of type 'text'\n\nVACUUM VERBOSE ANALYZE is running nightly\n\n/proc/cpuinfo sez:\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 11\nmodel name : Intel(R) Pentium(R) III CPU family 1400MHz\nstepping : 1\ncpu MHz : 1406.005\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 mmx fxsr sse\nbogomips : 2804.94\n\nFinally, any \"rules of thumb\" about that one 512 RAM size thingie in\npostmaster.conf would be especially appreciated...\n\nIf you're willing to actually poke at the search engine with other inputs,\nI'd be happy to provide a URL off-list.\n\n\n\n", "msg_date": "Wed, 20 Nov 2002 15:40:46 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Query Analysis" }, { "msg_contents": "\"typea\":\n\n> Why does this take minutes:\n> \n> SELECT DISTINCT *, 0 + (0 + 10 * (lower(title) like '%einstein%') ::int +\n> 10 * (lower(author_flattened) like '%einstein%') ::int + 30 *\n> (lower(subject_flattened) like '%einstein%') ::int + 30 * (lower(text)\n> LIKE '%einstein%') ::int + 9 * (substring(lower(title), 1, 20) like\n> '%einstein%') ::int + 25 * (substring(lower(text), 1, 20) LIKE\n> '%einstein%') ::int ) AS points FROM article WHERE TRUE AND (FALSE OR\n> (lower(title) like '%einstein%') OR (lower(author_flattened) like\n> '%einstein%') OR (lower(subject_flattened) like '%einstein%') OR\n> (lower(text) LIKE '%einstein%') ) ORDER BY points desc, volume, number,\n> article.article LIMIT 10, 0\n> \n> while this takes seconds:\n> \n> SELECT *, 0 + 3 * ( title like '%Einstein%' )::int + 3 * ( author like\n> '%Einstein%' )::int + ( ( 1 + 1 * ( lower(text) like '%einstein%' )::int )\n> + ( 0 + ( subject like '%Einstein%' )::int ) ) AS points FROM article\n> WHERE TRUE AND title like '%Einstein%' AND author like '%Einstein%' AND (\n> ( TRUE AND lower(text) like '%einstein%' ) OR ( FALSE OR subject like\n> '%Einstein%' ) ) ORDER BY points desc, volume, number, article.article\n> LIMIT 10, 0\n\nIt's probably mostly the SELECT DISTINCT, which aggregates records and is \ntherefore slow. 
Try running EXPLAIN ANALYZE to see what steps are actually \ntaking the time.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 20 Nov 2002 15:52:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Analysis" }, { "msg_contents": "Since it's 7.1.3 I don't have the \"ANALYZE\" bit in EXPLAIN, but:\n\narchive_beta=> explain SELECT DISTINCT *, 0 + (0 + 10 * (lower(title) like\n'%einstein%') ::int + 10 * (lower(author_flattened) like '%einstein%')\n::int + 30 * (lower(subject_flattened) like '%einstein%') ::int + 30 *\n(lower(text) LIKE '%einstein%') ::int + 9 * (substring(lower(title), 1,\n20) like '%einstein%') ::int + 25 * (substring(lower(text), 1, 20) LIKE\n'%einstein%') ::int ) AS points FROM article WHERE TRUE AND (FALSE OR\n(lower(title) like '%einstein%') OR (lower(author_flattened) like\n'%einstein%') OR (lower(subject_flattened) like '%einstein%') OR\n(lower(text) LIKE '%einstein%') ) ORDER BY points desc, volume, number,\narticle.article LIMIT 10, 0 ;\nNOTICE: QUERY PLAN:\n\nLimit (cost=1374.97..1375.02 rows=1 width=212)\n -> Unique (cost=1374.97..1375.02 rows=1 width=212)\n -> Sort (cost=1374.97..1374.97 rows=1 width=212)\n -> Seq Scan on article (cost=0.00..1374.96 rows=1 width=212)\n\nEXPLAIN\narchive_beta=> explain SELECT *, 0 + 3 * ( title like '%Einstein%' )::int\n+ 3 * ( author like '%Einstein%' )::int + ( ( 1 + 1 * ( lower(text) like\n'%einstein%' )::int ) + ( 0 + ( subject like '%Einstein%' )::int ) ) AS\npoints FROM article WHERE TRUE AND title like '%Einstein%' AND author\nlike '%Einstein%' AND ( ( TRUE AND lower(text) like '%einstein%' ) OR (\nFALSE OR subject like '%Einstein%' ) ) ORDER BY points desc, volume,\nnumber, article.article LIMIT 10, 0;\nNOTICE: QUERY PLAN:\n\nLimit (cost=1243.48..1243.48 rows=1 width=212)\n -> Sort (cost=1243.48..1243.48 rows=1 width=212)\n -> Seq Scan on article (cost=0.00..1243.47 rows=1 width=212)\n\nWhile the first one is higher, these two do not seem drastically different\nto me -- Those numbers are accumulative, right? So the top row is my\n\"final answer\" The extra Unique row doesn't seem to be adding\nsignificantly to the numbers as far as EXPLAIN can tell...\n\nAnd yet the queries are orders of magnitude apart in actual performance.\n\n'Course, I don't claim to completely understand the output of EXPLAIN yet\neither.\n\nI also took out the DISTINCT in the first one, just to test. 
It was\ncertainly \"faster\" but not nearly so much that it \"caught up\" to the other\nquery.\n\nThanks in advance for any help!\n\n\n\n", "msg_date": "Thu, 21 Nov 2002 08:16:14 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Analysis" }, { "msg_contents": "I think I've narrowed down my problem space a bit.\n\nPlaying with various \"fast\" versus \"slow\" queries leads me to ask:\n\nGIVEN:\n\n15,000 reocrds with a 'text' field named 'text'\nAverage 'text' length about 10 K.\nFull text search using lower() and LIKE and even ~* sometimes on that\nfield with a keyword.\n\nWhat can be done to maximize performance on such large chunks of text?\n\nMore RAM?\nTweak that 512 number in postmaster.conf?\nFaster CPU?\nIs my only option to resort to a concordance?\n\n\n\n", "msg_date": "Thu, 21 Nov 2002 11:21:26 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Analysis" }, { "msg_contents": "<[email protected]> writes:\n> 15,000 reocrds with a 'text' field named 'text'\n> Average 'text' length about 10 K.\n> Full text search using lower() and LIKE and even ~* sometimes on that\n> field with a keyword.\n\nConsider using a full-text-indexing method (there are a couple in\ncontrib, and OpenFTS has a website that was mentioned recently in\nthe mail lists).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 14:41:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Analysis " } ]
[ { "msg_contents": "Hi!\nThis is my first post to the list.\n\nI'm currently searching to improve the performances of some requests,\nand recently switched to postgresql 7.3rc1.\n\nI thought there would be in this release a kind of cache for the results\nof requests (i.e. the second time a request is asked, if the concerned\ntables haven't changed, the result of the request isn't calculated another time, but\ntaken from a cache.) ?\nAm i wrong ?\nDoes such a mechanism exist ?\nIs it planned to create one ?\n\nThanks for your help.\n\nBest regards,\nDavid\n-- \[email protected]\n", "msg_date": "Thu, 21 Nov 2002 10:28:38 +0100", "msg_from": "David Pradier <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a system of cache in pgsql 7.3rc1 ?" }, { "msg_contents": "On Thu, 21 Nov 2002, David Pradier wrote:\n\n> Hi!\n> This is my first post to the list.\n> \n> I'm currently searching to improve the performances of some requests,\n> and recently switched to postgresql 7.3rc1.\n> \n> I thought there would be in this release a kind of cache for the results\n> of requests (i.e. the second time a request is asked, if the concerned\n> tables haven't changed, the result of the request isn't calculated another time, but\n> taken from a cache.) ?\n> Am i wrong ?\n> Does such a mechanism exist ?\n> Is it planned to create one ?\n\nThis issue has been discussed. The performance gains from a results cache \nare not all that great, and postgresql's mvcc \"locking\" mechanism isn't a \ngood candidate to be served by results caching / updating. Generally \nspeaking, if you've got enough memory in your box, then the results are \n\"cached\" in memory, requiring only sorting before being output.\n\nThis is a niche problem that is not likely to be implemented any time \nsoon.\n\n", "msg_date": "Thu, 21 Nov 2002 10:38:20 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a system of cache in pgsql 7.3rc1 ?" } ]
[ { "msg_contents": "\nHi folks,\n\nI have two options:\n3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\nand \n2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n\nDoes anyone opinions *performance wise* the pros and cons of above \ntwo options.\n\nplease take in consideration in latter case its higher RPM and better\nSCSI interface. \n\n\n\nRegds\nMallah.\n\n\n\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Thu, 21 Nov 2002 22:15:02 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <[email protected]>", "msg_from_op": true, "msg_subject": "H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "How are you going to make use of the three faster drives under \npostgresql? Were you intending to put the WAL, system/swap, and the \nactual data files on separate drives/partitions? Unless you do \nsomething like that (or s/w RAID to distribute the processing across the \ndisks), you really have ONE SCSI 15K Ultra320 drive against 3 slower \ndrives with the RAID overhead (and spreading of performance because of \nthe multiple heads).\n\nI don't have specifics here, but I'd expect that the RAID5 on slower \ndrives would work better for apps with lots of selects or lots of \nconcurrent users. I suspect that the Ultra320 would be better for batch \njobs and mostly transactions with less selects.\n\nCharlie\n\nRajesh Kumar Mallah. wrote:\n\n>Hi folks,\n>\n>I have two options:\n>3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n>and \n>2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n>\n>Does anyone opinions *performance wise* the pros and cons of above \n>two options.\n>\n>please take in consideration in latter case its higher RPM and better\n>SCSI interface. \n>\n>\n>\n>Regds\n>Mallah.\n>\n>\n>\n>\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Thu, 21 Nov 2002 12:06:03 -0500", "msg_from": "\"Charles H. Woloszynski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster" }, { "msg_contents": "raid 0 (striping) spreads the load over multiple spindels, the same way raid 5 \ndoes. but raid 5 always needs to calculate parity and write that to it's \nparity drive.\n\nRPM isn't that critical, a lot depends on the machine, the processor and the \nmemory (and the spped with which the processor can get to the memory). I have \nrecently tested a lot of systems with some database benchmarks we wrote here \nat work. We're not running Postgres here at work, sorry, these benchmarks are \nof no use to Postgres ...\nWe we found is that a lot depends on motherboard design, not so much on drive \nspeed. We got to stages where we allocated 1.8 GB of RAM to shared memory for \nthe database server process, resulting in the entire database being sucked \ninto memory. When doing reads, 100% of the data is coming out the that \nmenory, and drive speed becomes irrelevant.\n\n From tests I did with Postgres on my boxes at home, I can say: The more shared \nmemory you can throw at the server process, the better. 
Under MacOS X I \nwasn't able to allocate more than 3 MB, Under Linux, I can allocate anything \nI want to, so I usually start up the server with 256 MB. The difference? A \nprocess which takes 4 minutes under Linux, takes 6 hours under MacOS - same \nhardware, same drives, different memory settings.\n\nBest regards,\nChris\n\nOn Thursday 21 November 2002 12:02, you wrote:\n> Thanks Chris,\n>\n> does raid0 enhances both read/write both?\n>\n> does rpms not matter that much?\n>\n> regds\n> mallah.\n>\n> On Thursday 21 November 2002 22:27, you wrote:\n> > RAID 5 gives you pretty bad performance, a slowdown of about 50%. For\n> > pure performance, I'd use the 3 18 GB drives with RAID 0.\n> >\n> > If you need fault tolerance, you could use RAID 0+1 or 1+0 but you'd need\n> > an even number of drives for that, of which half would become 'usable\n> > space'.\n> >\n> > Best regards,\n> > Chris\n> >\n> > On Thursday 21 November 2002 11:45, you wrote:\n> > > Hi folks,\n> > >\n> > > I have two options:\n> > > 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> > > and\n> > > 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n> > >\n> > > Does anyone opinions *performance wise* the pros and cons of above\n> > > two options.\n> > >\n> > > please take in consideration in latter case its higher RPM and better\n> > > SCSI interface.\n> > >\n> > >\n> > >\n> > > Regds\n> > > Mallah.\n\n-- \nNetwork Grunt and Bit Pusher extraordinaire\n\n", "msg_date": "Thu, 21 Nov 2002 12:19:35 -0500", "msg_from": "Chris Ruprecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "On Thu, 21 Nov 2002, Rajesh Kumar Mallah. wrote:\n\n> \n> Hi folks,\n> \n> I have two options:\n> 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> and \n> 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n> \n> Does anyone opinions *performance wise* the pros and cons of above \n> two options.\n> \n> please take in consideration in latter case its higher RPM and better\n> SCSI interface. \n\nDoes the OS you're running on support software RAID? If so the dual 36 \ngigs in a RAID0 software would be fastest, and in a RAID1 would still be \npretty fast plus they would be redundant.\n\nDepending on your queries, there may not be a lot of difference between \nrunning the 3*18 hw RAID or the 2*36 setup, especially if most of your \ndata can fit into memory on the server.\n\nGenerally, the 2*36 should be faster for writing, and the 3*18 should be \nabout even for reads, maybe a little faster.\n\n", "msg_date": "Thu, 21 Nov 2002 10:32:05 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster" }, { "msg_contents": "\n\nOh i did not mention,\nits linux, it does.\n\nRAM: 2.0 GB\nCPU: Dual 2.0 Ghz Intel Xeon DP Processors.\n\n\nOn Thursday 21 November 2002 23:02, scott.marlowe wrote:\n> On Thu, 21 Nov 2002, Rajesh Kumar Mallah. wrote:\n> > Hi folks,\n> >\n> > I have two options:\n> > 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> > and\n> > 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n> >\n> > Does anyone opinions *performance wise* the pros and cons of above\n> > two options.\n> >\n> > please take in consideration in latter case its higher RPM and better\n> > SCSI interface.\n>\n> Does the OS you're running on support software RAID? 
If so the dual 36\n> gigs in a RAID0 software would be fastest, and in a RAID1 would still be\n> pretty fast plus they would be redundant.\n\n>\n> Depending on your queries, there may not be a lot of difference between\n> running the 3*18 hw RAID or the 2*36 setup, especially if most of your\n> data can fit into memory on the server.\n> Generally, the 2*36 should be faster for writing, and the 3*18 should be\n> about even for reads, maybe a little faster.\n\nSince i got lots of RAM and my Data Size (on disk ) is 2 GB i feel frequent reads\ncan happen from the memory.\n\n\nI have heard putting pg_xlog in a drive of its own helps in boosting updates to \nDB server.\nin that case shud i forget abt the h/w and use one disk exclusively for the WAL?\n\n\nRegds\nmallah.\n\n\n\n\n\n\n\n\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Thu, 21 Nov 2002 23:16:55 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "\n\nOK now i am reading Momjian's \"PostgreSQL Hardware Performance Tuning\" \nonce again ;-)\n\nmallah.\n\n\nOn Thursday 21 November 2002 23:02, scott.marlowe wrote:\n> On Thu, 21 Nov 2002, Rajesh Kumar Mallah. wrote:\n> > Hi folks,\n> >\n> > I have two options:\n> > 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> > and\n> > 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n> >\n> > Does anyone opinions *performance wise* the pros and cons of above\n> > two options.\n> >\n> > please take in consideration in latter case its higher RPM and better\n> > SCSI interface.\n>\n> Does the OS you're running on support software RAID? If so the dual 36\n> gigs in a RAID0 software would be fastest, and in a RAID1 would still be\n> pretty fast plus they would be redundant.\n>\n> Depending on your queries, there may not be a lot of difference between\n> running the 3*18 hw RAID or the 2*36 setup, especially if most of your\n> data can fit into memory on the server.\n>\n> Generally, the 2*36 should be faster for writing, and the 3*18 should be\n> about even for reads, maybe a little faster.\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Thu, 21 Nov 2002 23:28:43 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "I had long labored under the impression that RAID 5 should give me better \nperformance but I have since encountered many reports that this is not the \ncase. Do some searching on Google and you will probably find numerous \narticles.\n\nNote 3x18 w/RAID5 will give 36GB usable while 2x36 w/o RAID is 72GB. 
\nYou could use mirroring on the 2x36 and have the same usable space.\n\nA mirrored 2x36 setup will probably yield a marginal hit on writes (vs a \nsingle disk) and an improvement on reads due to having two drives to read \nfrom and will (based on the Scientific Wild Ass Guess method and knowing \nnothing about your overall system) probably be faster than the RAID5 \nconfiguration while giving you identical usable space and data safety.\n\nYou also may see improvements due to the 15,000RPM drives (of course RPM is \nsort of an arbitrary measure - you really want to know about track access \ntimes, latency, transfer rate, etc. and RPM is just one influencing factor \nfor the above).\n\nThe quality of your RAID cards will also be important (how fast do they \nperform their calculations, how much buffer do they have) as will the overall \nspecs of you system. If you have a bottleneck somewhere other than your raw \ndisk I/O then you can throw all the money you want at faster drives and see \nno improvement.\n\nCheers,\nSteve\n\n\nOn Thursday 21 November 2002 8:45 am, you wrote:\n> Hi folks,\n>\n> I have two options:\n> 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> and\n> 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n>\n> Does anyone opinions *performance wise* the pros and cons of above\n> two options.\n>\n> please take in consideration in latter case its higher RPM and better\n> SCSI interface.\n>\n>\n>\n> Regds\n> Mallah.\n", "msg_date": "Thu, 21 Nov 2002 10:56:29 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "\n\nThanks Steve,\n\nrecently i have come to know that i can only get 3*18 GB ultra160 10K\nhraddrives,\n\nmy OS is lunux , other parameters are\nRAM:2GB , CPU:2*2Ghz Xeon,\n\ni feel i will do away with raid use one disk for the OS \nand pg_dumps\n\n, one for tables and last one for WAL , does this sound good?\n\nregds\nmallah.\n\n\nOn Friday 22 November 2002 00:26, Steve Crawford wrote:\n> I had long labored under the impression that RAID 5 should give me better\n> performance but I have since encountered many reports that this is not the\n> case. Do some searching on Google and you will probably find numerous\n> articles.\n>\n> Note 3x18 w/RAID5 will give 36GB usable while 2x36 w/o RAID is 72GB.\n> You could use mirroring on the 2x36 and have the same usable space.\n>\n> A mirrored 2x36 setup will probably yield a marginal hit on writes (vs a\n> single disk) and an improvement on reads due to having two drives to read\n> from and will (based on the Scientific Wild Ass Guess method and knowing\n> nothing about your overall system) probably be faster than the RAID5\n> configuration while giving you identical usable space and data safety.\n>\n> You also may see improvements due to the 15,000RPM drives (of course RPM is\n> sort of an arbitrary measure - you really want to know about track access\n> times, latency, transfer rate, etc. and RPM is just one influencing factor\n> for the above).\n>\n> The quality of your RAID cards will also be important (how fast do they\n> perform their calculations, how much buffer do they have) as will the\n> overall specs of you system. 
If you have a bottleneck somewhere other than\n> your raw disk I/O then you can throw all the money you want at faster\n> drives and see no improvement.\n>\n> Cheers,\n> Steve\n>\n> On Thursday 21 November 2002 8:45 am, you wrote:\n> > Hi folks,\n> >\n> > I have two options:\n> > 3*18 GB 10,000 RPM Ultra160 Dual Channel SCSI controller + H/W Raid 5\n> > and\n> > 2*36 GB 15,000 RPM Ultra320 Dual Channel SCSI and no RAID\n> >\n> > Does anyone opinions *performance wise* the pros and cons of above\n> > two options.\n> >\n> > please take in consideration in latter case its higher RPM and better\n> > SCSI interface.\n> >\n> >\n> >\n> > Regds\n> > Mallah.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Fri, 22 Nov 2002 00:38:43 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "> A mirrored 2x36 setup will probably yield a marginal hit on writes (vs a\n> single disk) and an improvement on reads due to having two drives to read\n> from and will (based on the Scientific Wild Ass Guess method and knowing\n\nslightly offtopic:\n\nDoes anyone one if linux software raid 1 supports this method (reading from\nboth disks, thus doubling performance)?\n\nRegards,\nBjoern\n\n", "msg_date": "Thu, 21 Nov 2002 20:24:19 +0100", "msg_from": "\"Bjoern Metzdorf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "\n> Does anyone one if linux software raid 1 supports this method (reading from\n> both disks, thus doubling performance)?\n> \n\n From memory of reading slightly old (1999) howtos, I believe that the answer is yes, at least for the md system. Not sure about LVM, or even if mirroring is supported under LVM.\n\nI would guess that it shouldn't be too hard to test:\n\n1) set up dataset on mirred system.\n2) run pg_bench or one of the tpc benches.\n3) fail one of the drives in the mirror.\n4) run the test again. \n\nIf the read latency goes down, it should be reflected in the benchmark. \n\neric\n\n\n\n", "msg_date": "Thu, 21 Nov 2002 11:30:44 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "On Fri, 22 Nov 2002, Rajesh Kumar Mallah. wrote:\n\n> \n> \n> Thanks Steve,\n> \n> recently i have come to know that i can only get 3*18 GB ultra160 10K\n> hraddrives,\n> \n> my OS is lunux , other parameters are\n> RAM:2GB , CPU:2*2Ghz Xeon,\n> \n> i feel i will do away with raid use one disk for the OS \n> and pg_dumps\n> \n> , one for tables and last one for WAL , does this sound good?\n\nThat depends. Are you going to be mostly reading, mostly updating, or an \neven mix of both?\n\nIf you are going to be 95% reading, then don't bother moving WAL to \nanother drive, install the OS on the first 2 or 3 gigs of each drive, then \nmake a RAID5 out of what's left over and put everything on that. 
\n\nIf you're going to be mostly updating, then yes, your setup is a pretty \ngood choice. \n\nIf it will be mostly mixed, look at using a software RAID1.\n\nMore important will be tuning your database once it's up, i.e. increasing \nshared buffers, setting random page costs to reflect what percentage of \nyour dataset is likely to be cached (the closer you come to caching your \nwhole dataset, the closer random page cost approaches 1)\n\n\n\n", "msg_date": "Thu, 21 Nov 2002 12:39:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "Bjoern,\n\nYou may find that hoping for a doubling of performance by using RAID 1\nis a little on the optimistic side.\n\nExcept on very long sequential reads, media transfer rates are unlikely\nto be the limiting factor in disk throughput. Seek and rotational\nlatencies are the cost factor in random I/O, and with RAID 1, the\nperformance gain comes from reducing the mean latency -- on a single\nrequest, one disk will be closer to the data than the other. If the\nsoftware that's handling the RAID 1 will schedule concurrent requests,\nyou lose the advantage of reducing mean latency in this fashion, but you\ncan get some improvement in throughput by overlapping some latency\nperiods.\n\nWhile not wanting to argue against intelligent I/O design, memory is\ncheap these days, and usually gives big bang-for-buck in improving\nresponse times.\n\nAs to the specifics of how one level or another of Linux implements RAID\n1, I'm afraid I can't shed much light at the moment.\n\nRegards,\n\nMike\nOn Fri, 2002-11-22 at 06:24, Bjoern Metzdorf wrote:\n\n> > A mirrored 2x36 setup will probably yield a marginal hit on writes (vs a\n> > single disk) and an improvement on reads due to having two drives to read\n> > from and will (based on the Scientific Wild Ass Guess method and knowing\n> \n> slightly offtopic:\n> \n> Does anyone one if linux software raid 1 supports this method (reading from\n> both disks, thus doubling performance)?\n> \n> Regards,\n> Bjoern\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nMichael Nielsen\n\nph: 0411-097-023 email: [email protected]\n\n\nMike Nielsen\n\n________________________________________________________________________\n\n\n\n\n\n\n\nBjoern,\n\nYou may find that hoping for a doubling of performance by using RAID 1 is a little on the optimistic side.\n\nExcept on very long sequential reads, media transfer rates are unlikely to be the limiting factor in disk throughput.  Seek and rotational latencies are the cost factor in random I/O, and with RAID 1, the performance gain comes from reducing the mean latency --  on a single request, one disk will be closer to the data than the other.  
If the software that's handling the RAID 1 will schedule concurrent requests, you lose the advantage of reducing mean latency in this fashion, but you can get some improvement in throughput by overlapping some latency periods.\n\nWhile not wanting to argue against intelligent I/O design, memory is cheap these days, and usually gives big bang-for-buck in improving response times.\n\nAs to the specifics of how one level or another of Linux implements RAID 1, I'm afraid I can't shed much light at the moment.\n\nRegards,\n\nMike\nOn Fri, 2002-11-22 at 06:24, Bjoern Metzdorf wrote:\n\n> A mirrored 2x36 setup will probably yield a marginal hit on writes (vs a\n> single disk) and an improvement on reads due to having two drives to read\n> from and will (based on the Scientific Wild Ass Guess method and knowing\n\nslightly offtopic:\n\nDoes anyone one if linux software raid 1 supports this method (reading from\nboth disks, thus doubling performance)?\n\nRegards,\nBjoern\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n\nMichael Nielsen\n\nph: 0411-097-023 email: [email protected]\n\n\nMike Nielsen", "msg_date": "22 Nov 2002 07:03:46 +1100", "msg_from": "Mike Nielsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "On Thu, 21 Nov 2002, Bjoern Metzdorf wrote:\n\n> > A mirrored 2x36 setup will probably yield a marginal hit on writes (vs a\n> > single disk) and an improvement on reads due to having two drives to read\n> > from and will (based on the Scientific Wild Ass Guess method and knowing\n> \n> slightly offtopic:\n> \n> Does anyone one if linux software raid 1 supports this method (reading from\n> both disks, thus doubling performance)?\n\nYes, it does. Generally speaking, it increases raw throughput by a factor \nof 2 if you're grabbing enough data to justify reading it from both \ndrives. But for most database apps, you don't read enough at a time to \nget a gain from this. I.e. if your stripe size is 8k and you're reading \n1k at a time, no gain.\n\nHowever, under parallel load, the extra drives really help.\n\nIn fact, the linux kernel supports >2 drives in a mirror. Useful for a \nmostly read database that needs to handle lots of concurrent users.\n\n", "msg_date": "Thu, 21 Nov 2002 13:17:05 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "> In fact, the linux kernel supports >2 drives in a mirror. Useful for a \n> mostly read database that needs to handle lots of concurrent users.\n\nGood to know.\n\nWhat do you think is faster: 3 drives in raid 1 or 3 drives in raid 5?\n\nRegards,\nBjoern\n\n", "msg_date": "Thu, 21 Nov 2002 21:53:02 +0100", "msg_from": "\"Bjoern Metzdorf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "Bjoern,\n\n> Good to know.\n> \n> What do you think is faster: 3 drives in raid 1 or 3 drives in raid\n> 5?\n\nMy experience? Raid 1. But that depends on other factors as well;\nyour controller (software controllers use system RAM and thus lower\nperformance), what kind of reads you're getting and how often. IMHO,\nRAID 5 is faster for sequential reads (lareg numbers of records on\nclustered tables), RAID 1 for random reads.\n\nAnd keep in mind: RAID 5 is *bad* for data writes. 
In my experience,\ndatabase data-write performance on RAID 5 UW SCSI is as slow as IDE\ndrives, particulary for updating large numbers of records, *unless* the\nupdated records are sequentially updated and clustered.\n\nBut in a multi-user write-often setup, RAID 5 will slow you down and\nRAID 1 is better.\n\nDid that help?\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 13:20:56 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no" }, { "msg_contents": "Bjoern,\n\n> Good to know.\n> \n> What do you think is faster: 3 drives in raid 1 or 3 drives in raid\n> 5?\n\nMy experience? Raid 1. But that depends on other factors as well;\nyour controller (software controllers use system RAM and thus lower\nperformance), what kind of reads you're getting and how often. IMHO,\nRAID 5 is faster for sequential reads (lareg numbers of records on\nclustered indexes), RAID 1 for random reads.\n\nAnd keep in mind: RAID 5 is *bad* for data writes. In my experience,\ndatabase data-write performance on RAID 5 UW SCSI is as slow as IDE\ndrives, particulary for updating large numbers of records, *unless* the\nupdated records are sequentially updated and clustered.\n\nBut in a multi-user write-often setup, RAID 5 will slow you down and\nRAID 1 is better.\n\nDid that help?\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 13:21:16 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no" }, { "msg_contents": "On Thu, 21 Nov 2002, Bjoern Metzdorf wrote:\n\n> > In fact, the linux kernel supports >2 drives in a mirror. Useful for a \n> > mostly read database that needs to handle lots of concurrent users.\n> \n> Good to know.\n> \n> What do you think is faster: 3 drives in raid 1 or 3 drives in raid 5?\n\nGenerally RAID 5. RAID 1 is only faster if you are doing a lot of \nparellel reads. I.e. you have something like 10 agents reading at the \nsame time. RAID 5 also works better under parallel load than a single \ndrive.\n\nThe fastest of course, is multidrive RAID0. But there's no redundancy.\n\nOddly, my testing doesn't show any appreciable performance increase in \nlinux by layering RAID5 or 1 over RAID0 or vice versa, something that \nis usually faster under most setups.\n\n", "msg_date": "Thu, 21 Nov 2002 14:24:00 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "> Generally RAID 5. RAID 1 is only faster if you are doing a lot of\n> parellel reads. I.e. you have something like 10 agents reading at the\n> same time. RAID 5 also works better under parallel load than a single\n> drive.\n\nyep, but write performance sucks.\n\n> The fastest of course, is multidrive RAID0. But there's no redundancy.\n\nWith 4 drives I'd always go for raid 10, fast and secure\n\n> Oddly, my testing doesn't show any appreciable performance increase in\n> linux by layering RAID5 or 1 over RAID0 or vice versa, something that\n> is usually faster under most setups.\n\nIs this with linux software raid? raid10 is not significantly faster? 
cant\nbelieve that...\n\nRegards,\nBjoern\n\n", "msg_date": "Thu, 21 Nov 2002 22:57:59 +0100", "msg_from": "\"Bjoern Metzdorf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "On Thu, 21 Nov 2002, Bjoern Metzdorf wrote:\n\n> > Generally RAID 5. RAID 1 is only faster if you are doing a lot of\n> > parellel reads. I.e. you have something like 10 agents reading at the\n> > same time. RAID 5 also works better under parallel load than a single\n> > drive.\n> \n> yep, but write performance sucks.\n\nWell, it's not all that bad. After all, you only have to read the parity \nstripe and data stripe (two reads) update the data stripe, xor the new \ndata stripe against the old parity stripe, and write both. In RAID 1 you \nhave to read the old data stripe, update it, and then write it to two \ndrives. So, generally speaking, it's not that much more work on RAID 5 \nthan 1. My experience has been that RAID5 is only about 10 to 20% percent \nslower than RAID1 in writing, if that.\n\n> > The fastest of course, is multidrive RAID0. But there's no redundancy.\n> \n> With 4 drives I'd always go for raid 10, fast and secure\n> \n> > Oddly, my testing doesn't show any appreciable performance increase in\n> > linux by layering RAID5 or 1 over RAID0 or vice versa, something that\n> > is usually faster under most setups.\n> \n> Is this with linux software raid? raid10 is not significantly faster? cant\n> believe that...\n\nYep, Linux software raid. It seems like it doesn't parallelize well. \nThat's with several different setups. I've tested it on a machine a dual \nUltra 40/80 controller and 6 Ultra wide 10krpm SCSI drives, and no matter \nhow I arrange the drives, 50, 10, 01, 05, the old 1 or 5 setups are just \nabout as fast.\n\n", "msg_date": "Thu, 21 Nov 2002 15:37:47 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "Am Donnerstag, 21. November 2002 21:53 schrieb Bjoern Metzdorf:\n> > In fact, the linux kernel supports >2 drives in a mirror. Useful for a\n> > mostly read database that needs to handle lots of concurrent users.\n>\n> Good to know.\n>\n> What do you think is faster: 3 drives in raid 1 or 3 drives in raid 5?\n>\n> Regards,\n> Bjoern\n>\n\nIf 4 drives are an option, I suggest 2 x RAID1, one for data, and one for WAL and temporary DB space (pg_temp).\n\nRegards,\n\tMario Weilguni\n\n\n", "msg_date": "Fri, 22 Nov 2002 08:31:11 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> If 4 drives are an option, I suggest 2 x RAID1, one for data, and one for WAL and temporary DB space (pg_temp).\n\nIdeally there should be *nothing* on the WAL drive except WAL; you don't\never want that disk head seeking away from the WAL. 
Put the temp files\non the data disk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Nov 2002 08:52:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on " }, { "msg_contents": "[email protected] wrote:\n> Objet : Re: [PERFORM] [ADMIN] H/W RAID 5 on slower disks versus no\n> raid on\n> \n> \n> Mario Weilguni <[email protected]> writes:\n>> If 4 drives are an option, I suggest 2 x RAID1, one for data, and\n>> one for WAL and temporary DB space (pg_temp). \n> \n> Ideally there should be *nothing* on the WAL drive except WAL; you\n> don't ever want that disk head seeking away from the WAL. Put the\n> temp files on the data disk.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 4: Don't 'kill -9' the\n> postmaster \n\nwhich temp files ?\n", "msg_date": "Fri, 22 Nov 2002 15:17:26 +0100", "msg_from": "\"philip johnson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on " }, { "msg_contents": "On Fri, Nov 22, 2002 at 08:52:48AM -0500, Tom Lane wrote:\n> Mario Weilguni <[email protected]> writes:\n> > If 4 drives are an option, I suggest 2 x RAID1, one for data, and one for WAL and temporary DB space (pg_temp).\n> \n> Ideally there should be *nothing* on the WAL drive except WAL; you don't\n> ever want that disk head seeking away from the WAL. Put the temp files\n> on the data disk.\n\nUnless the interface and disks are so fast that it makes no\ndifference.\n\nTry as I might, I can't make WAL go any faster on its own controller\nand disks than if I leave it on the same filesystem as everything\nelse, on our production arrays. We use Sun A5200s, and I have tried\nit set up with the WAL on separate disks on the box, and on separate\ndisks in the array, and even on separate disks on a separate\ncontroller in the array (I've never tried it with two arrays, but I\ndon't have infinite money, either). I have never managed to\ndemonstrate a throughput difference outside the margin of error of my\ntests. One arrangement -- putting the WAL on a separate pair of UFS\ndisks using RAID 1, but not on the fibre channel -- was demonstrably\nslower than leaving the WAL in the data area.\n\nNothing is proved by this, of course, except that if you're going to\nuse high-performance hardware, you have to tune and test over and\nover again. I was truly surprised that a separate pair of VxFS\nRAID-1 disks in the array were no faster, but I guess it makes sense:\nthe array is just as busy in either case, and the disks are really\nfast. I still don't really believe it, though.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 22 Nov 2002 10:01:52 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] H/W RAID 5 on slower disks versus no raid on" }, { "msg_contents": "On Mon, 2002-11-25 at 23:03, David Gilbert wrote:\n>\n> I'm on a bit of a mission to stamp out this misconception. In my\n> testing, all but the most expensive hardware raid controllers are\n> actually slower than FreeBSD's software RAID. 
I've done my tests with\n> a variety of controllers with the same data load and the same disks.\n>\n\nI agree 100%: hardware raid sucks.\nWe had a raid 5 Postgres server on midgrade hardware with 5 60gig 7200rpm\nIDE disks (240 gig total) and the thouroughput was just as high (maybe 10%\nless) than a single disk. Same for the seek times. CPU around 1Ghz never\nhit more than 10% for the raid service. Since very few databases are CPU\nlimited, this is a small price to pay.\n\nWe confirmed the performance results with heavy testing. There is virtually\nno disadvatage to software raid, just spend 10$ and get a 10% faster cpu.\n\nThe linux software raid drivers (and others I assume) are very optimized.\nNot to sure about m$ but win2k comes with raid services, its pretty\nreasonalbe to believe they work ok.\n\nYou can double the performance of a raid system by going 0+x or x+0 (eg 05\nor 50). just by adding drives. This really doubles it, and an optmized\nsoftware driver improves the seek times too by placing the idle heads it\ndifferent places on the disks.\n\np.s. scsi is a huge waste of money and is no faster than IDE. IMHO, scsi's\nonly advantage is longer cables. Both interfaces will soon be obsolete with\nthe coming serial ATA. High rpm disks are very expensive and add latent\nheat to your system. Western digitial's IDE disks outperform even top end\nscsi disks at a fraction of a cost. You can install a 4 drive 10 raid setup\nfor the cost of a single 15k rpm scsi disk that will absolutely blow it away\nin terms of performance (reliability too).\n\nJust remember, don't add more than one IDE disk on a raid system to a single\nIDE channel! Also, do not attempt to buy IDE cables longer than 18\"..they\nwill not be reliable.\n\nMerlin\n\n\n", "msg_date": "Wed, 27 Nov 2002 13:16:35 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster HDDs" }, { "msg_contents": "On Wed, Nov 27, 2002 at 01:16:35PM -0500, Merlin Moncure wrote:\n> I agree 100%: hardware raid sucks.\n\nI've been mostly ignoring this thread, but I'm going to jump in at\nthis point. \n\n> We confirmed the performance results with heavy testing. There is virtually\n> no disadvatage to software raid, just spend 10$ and get a 10% faster cpu.\n\nDefine heavy testing. \n\nI can do sequential selects on a low end PC with one client and have it \nperform as well as an E10K. I could also fire up 600 clients doing \nseemingly random queries and updates and reduce the same low end PC to \na smoldering pile of rubble. \n\nIt'd be easy to fudge the results of the \"heavy testing\" to match what I \nwanted to believe.\n\n> The linux software raid drivers (and others I assume) are very optimized.\n\nAs are the Solaris drivers, and many others. But there is more to a\nRAID array than drivers. There's the stability of the controller\nchipsets and the latency involved in getting data to and from the\ndevices.\n\nWould you feel comfortable if you knew the state data for the aircraft\nyou're travelling on was stored on IDE software RAID?\n\nPart of the point of hardware raid is that it does do a small set\nof operations, and therefore far easier to specify and validate the\ncorrect operation of the software and hardware.\n\n> p.s. scsi is a huge waste of money and is no faster than IDE. IMHO, scsi's\n> only advantage is longer cables. 
Both interfaces will soon be obsolete with\n> the coming serial ATA.\n\nDon't get me wrong, I'm a huge fan of IDE RAID in the right locations,\nbut comments like this reflect a total lack of understanding what\nmakes SCSI a better protocol over IDE.\n\nDisconnected operation is one _HUGE_ benefit of SCSI, simply being the\nability for the CPU and controller to send a command, and then both head \noff to do another task while waiting for data to be returned from the \ndevice. Most (that is most, not all) IDE controllers are incapable of \nthis. Another is command reordering (which I believe SATA does have),\nbeing the reordering of requests to better utilise each head sweep.\n\nThis feature comes into play far more obviously when you have many\nclients performing operations across a large dataset where the\nelements have no immediate relationship to each other.\n\nIt is amplified when your database of such a size, and used in a way\nthat you have multiple controllers with multiple spools.\n\nSCSI is not about speed to and from the device, although this does end\nup being a side effect of the design. It's about latency, and removal of \ncontention from the shared bus. \n\nUltra/320 devices in reality are no faster than Ultra/160 devices.\nWhat is faster, is that you can now have 4 devices instead of 2 on the\nsame bus, with lower request latency and no reduction in\nthroughput performance.\n\nSCSI also allows some more advanced features too. Remote storage \nover fibre, iSCSI, shared spools just to name a few.\n\n> High rpm disks are very expensive and add latent heat to your system. \n\nIf you have a real justification for SCSI in your database server, you\nprobably do have both the cooling and the budget to accomodate it.\n\n> Western digitial's IDE disks outperform even top end scsi disks at a \n> fraction of a cost. \n\nComplete and utter rubbish. That's like saying your 1.1 litre small\ncity commuter hatch can outperform a 600hp Mack truck.\n\nYes, in the general case it's quite probable. Once you start\nshuffling real loads IDE will grind the machine to a halt. Real\ndatabase iron does not use normal IDE.\n\n> You can install a 4 drive 10 raid setup for the cost of a single 15k \n> rpm scsi disk that will absolutely blow it away in terms of performance \n\nSee above. Raw disk speed does not equal performance. Database spool\nperformance is a combination of a large number of factors, one being\nseek time, and another being bus contention.\n\n> (reliability too).\n\nNow you're smoking crack. Having run some rather large server farms\nfor some very large companies, I can tell you with both anecdotal, and\nrecorded historical evidence that the failure rate for IDE was at\nleast double, if not four times that of the SCSI hardware.\n\nAnd the IDE hardware was under much lower loads than the SCSI\nhardware.\n\n> Just remember, don't add more than one IDE disk on a raid system to a single\n> IDE channel! Also, do not attempt to buy IDE cables longer than 18\"..they\n> will not be reliable.\n\nSo now you're pointing out that you share PCI bus interrupts over a large \nnumber of devices, introducing another layer of contention and that \nyou'll have to cable your 20 spool machine with 20 cables each no longer \nthan 45cm. Throw in some very high transaction rates, and a large \ndata set that won't fit in your many GB of ram.\n\nI believe the game show sound effect would be similar to \"Bzzzt\".\n\nIDE for the general case is acceptable. 
SCSI is for everything else.\n\n-- \nDavid Jericho\nSenior Systems Administrator, Bytecomm Pty Ltd\n", "msg_date": "Fri, 29 Nov 2002 12:58:31 +1000", "msg_from": "David Jericho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: H/W RAID 5 on slower disks versus no raid on faster HDDs" } ]
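
A minimal sketch of the dedicated-WAL layout the thread above converges on (one mirrored pair for pg_xlog, the data files elsewhere), assuming a 7.2/7.3-era server where the usual way to relocate WAL is a symlink. The paths /var/lib/pgsql/data and /mnt/pg_wal and the postgres account name are illustrative assumptions only, not taken from any of the posts; /mnt/pg_wal is assumed to be the already-mounted dedicated mirror.

    # Stop the postmaster before touching pg_xlog
    pg_ctl stop -D /var/lib/pgsql/data
    # Move the WAL directory onto the dedicated spindle and link it back
    mv /var/lib/pgsql/data/pg_xlog /mnt/pg_wal/pg_xlog
    ln -s /mnt/pg_wal/pg_xlog /var/lib/pgsql/data/pg_xlog
    # WAL files must remain owned by the server account
    chown -R postgres /mnt/pg_wal/pg_xlog
    pg_ctl start -D /var/lib/pgsql/data

Keeping everything else off that spindle is the point: the disk head stays parked on the WAL, as Tom Lane notes above, and the temp files go with the data disks.
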
[ { "msg_contents": "\nhello all,\n\nhow often should \"vacuum full\" usually be run ?\n\nthanks,\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n", "msg_date": "Thu, 21 Nov 2002 17:52:05 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum full" }, { "msg_contents": "On Thu, 21 Nov 2002, Henrik Steffen wrote:\n\n> \n> hello all,\n> \n> how often should \"vacuum full\" usually be run ?\n\nI recommend nightly. Also, check index size. If they are growing, you \nmay want to reindex each night or week as well.\n\n", "msg_date": "Thu, 21 Nov 2002 10:32:44 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full" }, { "msg_contents": "\nok, but how can I measure index sizes?\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, November 21, 2002 6:32 PM\nSubject: Re: [PERFORM] vacuum full\n\n\n> On Thu, 21 Nov 2002, Henrik Steffen wrote:\n>\n> >\n> > hello all,\n> >\n> > how often should \"vacuum full\" usually be run ?\n>\n> I recommend nightly. Also, check index size. If they are growing, you\n> may want to reindex each night or week as well.\n>\n\n", "msg_date": "Thu, 21 Nov 2002 19:17:46 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum full" }, { "msg_contents": "Hello--\n\nAttached is a file containing two SQL queries. The first take\nprohibitively long to complete because, according to EXPLAIN, it ignore\ntwo very important indexes. The second SQL query seems almost identical\nto the first but runs very fast because, according to EXPLAIN, it does\nuses all the indexes appropriately. \n\nCan someone please explain to me what the difference is here? Or if\nthere is something I can do with my indexes to make the first query run\nlike the second?\n\n\nThanks Much\n\nPeter", "msg_date": "21 Nov 2002 10:18:53 -0800", "msg_from": "\"Peter T. 
Brown\" <[email protected]>", "msg_from_op": false, "msg_subject": "stange optimizer results" }, { "msg_contents": "sorry, didn't notice your message posted to pgsql-general...\n\nbut is there any method to see the size in bytes a particular index\nfor a particular table takes?\n\n--\n\nMit freundlichem Gru�\n\nHenrik Steffen\nGesch�ftsf�hrer\n\ntop concepts Internetmarketing GmbH\nAm Steinkamp 7 - D-21684 Stade - Germany\n--------------------------------------------------------\nhttp://www.topconcepts.com Tel. +49 4141 991230\nmail: [email protected] Fax. +49 4141 991233\n--------------------------------------------------------\n24h-Support Hotline: +49 1908 34697 (EUR 1.86/Min,topc)\n--------------------------------------------------------\nIhr SMS-Gateway: JETZT NEU unter: http://sms.city-map.de\nSystem-Partner gesucht: http://www.franchise.city-map.de\n--------------------------------------------------------\nHandelsregister: AG Stade HRB 5811 - UstId: DE 213645563\n--------------------------------------------------------\n\n----- Original Message -----\nFrom: \"scott.marlowe\" <[email protected]>\nTo: \"Henrik Steffen\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, November 21, 2002 6:32 PM\nSubject: Re: [PERFORM] vacuum full\n\n\n> On Thu, 21 Nov 2002, Henrik Steffen wrote:\n>\n> >\n> > hello all,\n> >\n> > how often should \"vacuum full\" usually be run ?\n>\n> I recommend nightly. Also, check index size. If they are growing, you\n> may want to reindex each night or week as well.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Thu, 21 Nov 2002 19:20:16 +0100", "msg_from": "\"Henrik Steffen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuum full" }, { "msg_contents": "On 21 Nov 2002, Peter T. Brown wrote:\n\n> Hello--\n>\n> Attached is a file containing two SQL queries. The first take\n> prohibitively long to complete because, according to EXPLAIN, it ignore\n> two very important indexes. The second SQL query seems almost identical\n> to the first but runs very fast because, according to EXPLAIN, it does\n> uses all the indexes appropriately.\n>\n> Can someone please explain to me what the difference is here? Or if\n> there is something I can do with my indexes to make the first query run\n> like the second?\n\nIt doesn't take into account that in general a=b, b=constant implies\na=constant.\n\n Perhaps if you used explicit join syntax for visitor joining\nvisitorextra it might help. Like doing:\n FROM visitor inner join visitorextra on (...)\n left outer join ...\n\n", "msg_date": "Thu, 21 Nov 2002 10:34:14 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stange optimizer results" }, { "msg_contents": "\nOn Thu, 21 Nov 2002, Stephan Szabo wrote:\n\n> On 21 Nov 2002, Peter T. Brown wrote:\n>\n> > Hello--\n> >\n> > Attached is a file containing two SQL queries. The first take\n> > prohibitively long to complete because, according to EXPLAIN, it ignore\n> > two very important indexes. The second SQL query seems almost identical\n> > to the first but runs very fast because, according to EXPLAIN, it does\n> > uses all the indexes appropriately.\n> >\n> > Can someone please explain to me what the difference is here? 
Or if\n> > there is something I can do with my indexes to make the first query run\n> > like the second?\n>\n> It doesn't take into account that in general a=b, b=constant implies\n> a=constant.\n>\n> Perhaps if you used explicit join syntax for visitor joining\n> visitorextra it might help. Like doing:\n> FROM visitor inner join visitorextra on (...)\n> left outer join ...\n\nSent this too quickly. It probably won't make it use an index on\nvistorextra, but it may lower the number of expected rows that it's going\nto be left outer joining so that a nested loop and index scan makes sense.\n\n", "msg_date": "Thu, 21 Nov 2002 10:35:37 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stange optimizer results" }, { "msg_contents": "On Thu, 21 Nov 2002 19:20:16 +0100\n\"Henrik Steffen\" <[email protected]> wrote:\n\n> sorry, didn't notice your message posted to pgsql-general...\n> \n> but is there any method to see the size in bytes a particular index\n> for a particular table takes?\n\nFor a table foo, do:\npsql mydatabase\n-- first find the index name:\nmydatabase=> \\d foo\n...\nIndexes: foo_pkey unique btree (mykey)\n-- now find the unix file name of the index:\nmydatabase=> select relfilenode from pg_class where relname='foo_pkey';\n relfilenode \n-------------\n 18122\n-- Thus the file name of the index is \"18122\".\n\\q\n# now go and look for the file:\nunixprompt> su postgres\nPassword:\npostgres> cd /var/lib/pgsql/data/base/????\npostgres> ls -l 18122\n-rw------- 1 postgres daemon 7471104 Nov 21 12:52 18122\n\nThus the index for table foo is 7.4 MBytes in size. What I left\nout is the ???? directory name above. I find it by educated guess.\n\nDoes someone know the right way to map from database name to \ndata directory name?\n\n-- George\n\n> From: \"scott.marlowe\" <[email protected]>\n> To: \"Henrik Steffen\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Thursday, November 21, 2002 6:32 PM\n> Subject: Re: [PERFORM] vacuum full\n> \n> \n> > On Thu, 21 Nov 2002, Henrik Steffen wrote:\n> > >\n> > > how often should \"vacuum full\" usually be run ?\n> >\n> > I recommend nightly. Also, check index size. If they are growing, you\n> > may want to reindex each night or week as well.\n\n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n", "msg_date": "Thu, 21 Nov 2002 13:50:36 -0500", "msg_from": "george young <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full" }, { "msg_contents": "On Thu, 21 Nov 2002, Henrik Steffen wrote:\n\n> sorry, didn't notice your message posted to pgsql-general...\n> \n> but is there any method to see the size in bytes a particular index\n> for a particular table takes?\n\nThere are some sql queries that can tell you the number of blocks used and \nall, but I generally do it with oid2name (you can get it installed by \ngoing into your source tree/contrib/oid2name and doing a make/make install \nthere.) \n\noid2name by itself will tell you the oids of your databases. 
On my fairly \nfresh system it looks like this:\n\nAll databases:\n---------------------------------\n16976 = postgres\n1 = template1\n16975 = template0\n\nThen, \n\n'oid2name -d postgres' outputs this:\n\n16999 = g\n17025 = g_name_dx\n16977 = gaff\n16988 = test\n16986 = test_id_seq\n17019 = tester\n\nSo, I can do this 'ls -l $PGDATA/base/16976/17025'\n\nto see how big the index g_name_dx is.\n\n", "msg_date": "Thu, 21 Nov 2002 12:12:35 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full" }, { "msg_contents": "george young <[email protected]> writes:\n> Does someone know the right way to map from database name to \n> data directory name?\n\npg_database.oid column.\n\nHowever, that's definitely the hard way. I'd just look at the relpages\ncolumn of pg_class, which should be reasonably accurate if you've done\na VACUUM or ANALYZE recently. For example:\n\nregression=# select relname, relkind, relpages from pg_class where\nregression-# relname like 'tenk1%';\n relname | relkind | relpages\n---------------+---------+----------\n tenk1 | r | 358\n tenk1_hundred | i | 30\n tenk1_unique1 | i | 30\n tenk1_unique2 | i | 30\n(4 rows)\n\nHere we have a table and its three indexes, and the index sizes look\nreasonable. If the index sizes approach or exceed the table size,\nyou are probably suffering from index bloat --- try a reindex.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Nov 2002 14:21:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuum full " }, { "msg_contents": "On 21 Nov 2002, Peter T. Brown wrote:\n\n> trouble is that this SQL is being automatically created by my\n> object-relational mapping software and I can't easily customize it. Is\n> there any other way I can force Postgres to do the most efficient thing?\n\nGiven no changes to the query, probably only by using blunt hammers like\nseeing if set enable_seqscan=off or set enable_mergejoin=off helps, but\nyou'd want to wrap the statement with them (setting them globally is\nbad) and you might not be able to do that as well if your software won't\nlet you.\n\n", "msg_date": "Thu, 21 Nov 2002 13:26:44 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stange optimizer results" }, { "msg_contents": "trouble is that this SQL is being automatically created by my\nobject-relational mapping software and I can't easily customize it. Is\nthere any other way I can force Postgres to do the most efficient thing?\n\n\n\nOn Thu, 2002-11-21 at 10:35, Stephan Szabo wrote:\n> \n> On Thu, 21 Nov 2002, Stephan Szabo wrote:\n> \n> > On 21 Nov 2002, Peter T. Brown wrote:\n> >\n> > > Hello--\n> > >\n> > > Attached is a file containing two SQL queries. The first take\n> > > prohibitively long to complete because, according to EXPLAIN, it ignore\n> > > two very important indexes. The second SQL query seems almost identical\n> > > to the first but runs very fast because, according to EXPLAIN, it does\n> > > uses all the indexes appropriately.\n> > >\n> > > Can someone please explain to me what the difference is here? Or if\n> > > there is something I can do with my indexes to make the first query run\n> > > like the second?\n> >\n> > It doesn't take into account that in general a=b, b=constant implies\n> > a=constant.\n> >\n> > Perhaps if you used explicit join syntax for visitor joining\n> > visitorextra it might help. 
Like doing:\n> > FROM visitor inner join visitorextra on (...)\n> > left outer join ...\n> \n> Sent this too quickly. It probably won't make it use an index on\n> vistorextra, but it may lower the number of expected rows that it's going\n> to be left outer joining so that a nested loop and index scan makes sense.\n> \n> \n-- \n\nPeter T. Brown\nDirector Of Technology\nMemetic Systems, Inc.\n\"Translating Customer Data Into Marketing Action.\"\n206.335.2927\nhttp://www.memeticsystems.com/\n\n", "msg_date": "21 Nov 2002 13:34:12 -0800", "msg_from": "\"Peter T. Brown\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: stange optimizer results" } ]
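
A small sketch pulling the thread above together: the same pg_class.relpages check Tom Lane shows, but joined through pg_index so it also finds indexes whose names do not start with the table name, followed by the nightly rebuild Scott suggests. The table name mytable is only a placeholder; relpages is counted in 8K pages and is only as fresh as the last VACUUM or ANALYZE.

    -- table size vs. the size of each of its indexes, in 8K pages
    SELECT t.relname AS table_name, t.relpages AS table_pages,
           i.relname AS index_name, i.relpages AS index_pages
      FROM pg_class t
      JOIN pg_index x ON x.indrelid = t.oid
      JOIN pg_class i ON i.oid = x.indexrelid
     WHERE t.relname = 'mytable';

    -- rebuild anything that has grown to rival the table itself
    REINDEX TABLE mytable;   -- or DROP INDEX / CREATE INDEX on servers where REINDEX is restricted
    VACUUM ANALYZE mytable;

Running this alongside the nightly "vacuum full" discussed at the top of the thread makes index bloat easy to catch before it hurts query times.
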
[ { "msg_contents": "There had been a great deal of discussion of how to improve the\nperformance of select/sorting on this list, what about\ninsert/delete/update?\n\nIs there any rules of thumb we need to follow? What are the parameters\nwe should tweak to whip the horse to go faster?\n\nThanks\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "21 Nov 2002 15:54:03 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": true, "msg_subject": "performance of insert/delete/update" }, { "msg_contents": "Wei,\n\n> There had been a great deal of discussion of how to improve the\n> performance of select/sorting on this list, what about\n> insert/delete/update?\n> \n> Is there any rules of thumb we need to follow? What are the\n> parameters\n> we should tweak to whip the horse to go faster?\n\nyes, lots of rules. Wanna be more specific? You wondering about\nquery structure, hardware, memory config, what?\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 13:23:57 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On 21 Nov 2002, Wei Weng wrote:\n\n> On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:\n> > Wei,\n> > \n> > > There had been a great deal of discussion of how to improve the\n> > > performance of select/sorting on this list, what about\n> > > insert/delete/update?\n> > > \n> > > Is there any rules of thumb we need to follow? What are the\n> > > parameters\n> > > we should tweak to whip the horse to go faster?\n> > \n> > yes, lots of rules. Wanna be more specific? You wondering about\n> > query structure, hardware, memory config, what?\n> I am most concerned about the software side, that is query structures\n> and postgresql config.\n\nThe absolutely most important thing to do to speed up inserts and updates \nis to squeeze as many as you can into one transaction. Within reason, of \ncourse. There's no great gain in putting more than a few thousand \ntogether at a time. If your application is only doing one or two updates \nin a transaction, it's going to be slower in terms of records written per \nsecond than an application that is updating 100 rows in a transaction.\n\nReducing triggers and foreign keys on the inserted tables to a minimum \nhelps.\n\nInserting into temporary holding tables and then having a regular process \nthat migrates the data into the main tables is sometimes necessary if \nyou're putting a lot of smaller inserts into a very large dataset. \nThen using a unioned view to show the two tables as one.\n\nPutting WAL (e.g. $PGDATA/pg_xlog directory) on it's own drive(s).\n\nPutting indexes that have to be updated during inserts onto their own \ndrive(s).\n\nPerforming regular vacuums on heavily updated tables.\n\nAlso, if your hardware is reliable, you can turn off fsync in \npostgresql.conf. That can increase performance by anywhere from 2 to 10 \ntimes, depending on your application.\n\n\n", "msg_date": "Thu, 21 Nov 2002 14:49:18 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 2002-11-21 at 16:23, Josh Berkus wrote:\n> Wei,\n> \n> > There had been a great deal of discussion of how to improve the\n> > performance of select/sorting on this list, what about\n> > insert/delete/update?\n> > \n> > Is there any rules of thumb we need to follow? 
What are the\n> > parameters\n> > we should tweak to whip the horse to go faster?\n> \n> yes, lots of rules. Wanna be more specific? You wondering about\n> query structure, hardware, memory config, what?\nI am most concerned about the software side, that is query structures\nand postgresql config.\n\nThanks\n\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "21 Nov 2002 17:23:57 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> The absolutely most important thing to do to speed up inserts and\n> updates \n> is to squeeze as many as you can into one transaction. Within\n> reason, of \n> course. There's no great gain in putting more than a few thousand \n> together at a time. If your application is only doing one or two\n> updates \n> in a transaction, it's going to be slower in terms of records written\n> per \n> second than an application that is updating 100 rows in a\n> transaction.\n\nThis only works up to the limit of the memory you have available for\nPostgres. If the updates in one transaction exceed your available\nmemory, you'll see a lot of swaps to disk log that will slow things\ndown by a factor of 10-50 times.\n\n> Reducing triggers and foreign keys on the inserted tables to a\n> minimum \n> helps.\n\n... provided that this will not jeapordize your data integrity. If you\nhave indispensable triggers in PL/pgSQL, re-qriting them in C will make\nthem, and thus updates on their tables, faster.\n\nAlso, for foriegn keys, it speeds up inserts and updates on parent\ntables with many child records if the foriegn key column in the child\ntable is indexed.\n\n> Putting WAL (e.g. $PGDATA/pg_xlog directory) on it's own drive(s).\n> \n> Putting indexes that have to be updated during inserts onto their own\n> \n> drive(s).\n> \n> Performing regular vacuums on heavily updated tables.\n> \n> Also, if your hardware is reliable, you can turn off fsync in \n> postgresql.conf. That can increase performance by anywhere from 2 to\n> 10 \n> times, depending on your application.\n\nIt can be dangerous though ... in the event of a power outage, for\nexample, your database could be corrupted and difficult to recover. So\n... \"at your own risk\".\n\nI've found that switching from fsync to fdatasync on Linux yields\nmarginal performance gain ... about 10-20%.\n\nAlso, if you are doing large updates (many records at once) you may\nwant to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to\nallow for large transactions.\n\nFinally, you want to structure your queries so that you do the minimum\nnumber of update writes possible, or insert writes. For example, a\nprocedure that inserts a row, does some calculations, and then modifies\nseveral fields in that row is going to slow stuff down significantly\ncompared to doing the calculations as variables and only a single\ninsert. Certainly don't hit a table with 8 updates, each updating one\nfield instead of a single update statement.\n\n-Josh Berkus\n", "msg_date": "Thu, 21 Nov 2002 14:26:40 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 21 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > The absolutely most important thing to do to speed up inserts and\n> > updates \n> > is to squeeze as many as you can into one transaction. Within\n> > reason, of \n> > course. 
There's no great gain in putting more than a few thousand \n> > together at a time. If your application is only doing one or two\n> > updates \n> > in a transaction, it's going to be slower in terms of records written\n> > per \n> > second than an application that is updating 100 rows in a\n> > transaction.\n> \n> This only works up to the limit of the memory you have available for\n> Postgres. If the updates in one transaction exceed your available\n> memory, you'll see a lot of swaps to disk log that will slow things\n> down by a factor of 10-50 times.\n\nSorry, but that isn't true. MVCC means we don't have to hold all the data \nin memory, we can have multiple versions of the same tuples on disk, and \nuse memory for what it's meant for, buffering.\n\nThe performance gain \ncomes from the fact that postgresql doesn't have to perform the data \nconsistency checks needed during an insert until after all the rows are \ninserted, and it can \"gang check\" them/\n\n> > Reducing triggers and foreign keys on the inserted tables to a\n> > minimum \n> > helps.\n> \n> ... provided that this will not jeapordize your data integrity. If you\n> have indispensable triggers in PL/pgSQL, re-qriting them in C will make\n> them, and thus updates on their tables, faster.\n\nAgreed. But you've probably seen the occasional \"I wasn't sure if we \nneeded that check or not, so I threw it in just in case\" kind of database \ndesign. :-)\n\nI definitely don't advocate just tossing all your FKs to make it run \nfaster. \n\nAlso note that many folks have replaced foreign keys with triggers and \ngained in performance, as fks in pgsql still have some deadlock issues to \nbe worked out.\n\n> Also, for foriegn keys, it speeds up inserts and updates on parent\n> tables with many child records if the foriegn key column in the child\n> table is indexed.\n\nAbsolutely.\n\n> > Putting WAL (e.g. $PGDATA/pg_xlog directory) on it's own drive(s).\n> > \n> > Putting indexes that have to be updated during inserts onto their own\n> > \n> > drive(s).\n> > \n> > Performing regular vacuums on heavily updated tables.\n> > \n> > Also, if your hardware is reliable, you can turn off fsync in \n> > postgresql.conf. That can increase performance by anywhere from 2 to\n> > 10 \n> > times, depending on your application.\n> \n> It can be dangerous though ... in the event of a power outage, for\n> example, your database could be corrupted and difficult to recover. So\n> ... \"at your own risk\".\n\nNo, the database will not be corrupted, at least not in my experience. \nhowever, you MAY lose data from transactions that you thought were \ncommitted. I think Tom posted something about this a few days back.\n\n> I've found that switching from fsync to fdatasync on Linux yields\n> marginal performance gain ... about 10-20%.\n\nI'll have to try that.\n\n> Also, if you are doing large updates (many records at once) you may\n> want to increase WAL_FILES and CHECKPOINT_BUFFER in postgresql.conf to\n> allow for large transactions.\n\nActually, postgresql will create more WAL files if it needs to to handle \nthe size of a transaction. BUT, it won't create extra ones for heavier \nparallel load without being told to. I've inserted 100,000 rows at a \ntime with no problem on a machine with only 1 WAL file specified, and it \ndidn't burp. It does run faster having multiple wal files when under \nparallel load.\n\n> Finally, you want to structure your queries so that you do the minimum\n> number of update writes possible, or insert writes. 
For example, a\n> procedure that inserts a row, does some calculations, and then modifies\n> several fields in that row is going to slow stuff down significantly\n> compared to doing the calculations as variables and only a single\n> insert. Certainly don't hit a table with 8 updates, each updating one\n> field instead of a single update statement.\n\nThis is critical, and bites many people coming from a row level locking \ndatabase to an MVCC database. In MVCC every update creates a new on disk \ntuple. I think someone on the list a while back was updating their \ndatabase something like this:\n\nupdate table set field1='abc' where id=1;\nupdate table set field2='def' where id=1;\nupdate table set field3='ghi' where id=1;\nupdate table set field4='jkl' where id=1;\nupdate table set field5='mno' where id=1;\nupdate table set field6='pqr' where id=1;\n\nand they had to vacuum something like every 5 minutes.\n\nAlso, things like:\n\nupdate table set field1=field1+1\n\nare killers in an MVCC database as well.\n\n", "msg_date": "Thu, 21 Nov 2002 15:54:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nScott,\n\n> > This only works up to the limit of the memory you have available for\n> > Postgres. If the updates in one transaction exceed your available\n> > memory, you'll see a lot of swaps to disk log that will slow things\n> > down by a factor of 10-50 times.\n> \n> Sorry, but that isn't true. MVCC means we don't have to hold all the data \n> in memory, we can have multiple versions of the same tuples on disk, and \n> use memory for what it's meant for, buffering.\n\nSorry, you're absolutely correct. I don't know what I was thinking of; 's the \nproblem with an off-the-cuff response.\n\nPlease disregard the previous quote. Instead:\n\nDoing several large updates in a single transaction can lower performance if \nthe number of updates is sufficient to affect index usability and a VACUUM is \nreally needed between them. For example, a series of large data \ntransformation statements on a single table or set of related tables should \nhave VACCUUM statements between them, thus preventing you from putting them \nin a single transaction. \n\nExample, the series:\n1. INSERT 10,000 ROWS INTO table_a;\n2. UPDATE 100,000 ROWS IN table_a WHERE table_b;\n3. UPDATE 100,000 ROWS IN table_c WHERE table_a;\n\nWIll almost certainly need a VACUUM or even VACUUM FULL table_a after 2), \nrequiring you to split the update series into 2 transactions. Otherwise, the \n\"where table_a\" condition in step 3) will be extremely slow.\n\n> Also note that many folks have replaced foreign keys with triggers and \n> gained in performance, as fks in pgsql still have some deadlock issues to \n> be worked out.\n\nYeah. I think Neil Conway is overhauling FKs, which everyone considers a bit \nof a hack in the current implementation, including Jan who wrote it.\n\n> > It can be dangerous though ... in the event of a power outage, for\n> > example, your database could be corrupted and difficult to recover. So\n> > ... \"at your own risk\".\n> \n> No, the database will not be corrupted, at least not in my experience. \n> however, you MAY lose data from transactions that you thought were \n> committed. I think Tom posted something about this a few days back.\n\nHmmm ... have you done this? I'd like the performance gain, but I don't want \nto risk my data integrity. 
I've seen some awful things in databases (such as \nduplicate primary keys) from yanking a power cord repeatedly.\n\n> update table set field1=field1+1\n> \n> are killers in an MVCC database as well.\n\nYeah -- don't I know it.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 21 Nov 2002 15:34:53 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Thu, 21 Nov 2002, Josh Berkus wrote:\n\n> Doing several large updates in a single transaction can lower performance if \n> the number of updates is sufficient to affect index usability and a VACUUM is \n> really needed between them. For example, a series of large data \n> transformation statements on a single table or set of related tables should \n> have VACCUUM statements between them, thus preventing you from putting them \n> in a single transaction. \n> \n> Example, the series:\n> 1. INSERT 10,000 ROWS INTO table_a;\n> 2. UPDATE 100,000 ROWS IN table_a WHERE table_b;\n> 3. UPDATE 100,000 ROWS IN table_c WHERE table_a;\n> \n> WIll almost certainly need a VACUUM or even VACUUM FULL table_a after 2), \n> requiring you to split the update series into 2 transactions. Otherwise, the \n> \"where table_a\" condition in step 3) will be extremely slow.\n\nVery good point. One that points out the different mind set one needs \nwhen dealing with pgsql.\n\n> > > It can be dangerous though ... in the event of a power outage, for\n> > > example, your database could be corrupted and difficult to recover. So\n> > > ... \"at your own risk\".\n> > \n> > No, the database will not be corrupted, at least not in my experience. \n> > however, you MAY lose data from transactions that you thought were \n> > committed. I think Tom posted something about this a few days back.\n> \n> Hmmm ... have you done this? I'd like the performance gain, but I don't want \n> to risk my data integrity. I've seen some awful things in databases (such as \n> duplicate primary keys) from yanking a power cord repeatedly.\n\nI have, with killall -9 postmaster, on several occasions during testing \nunder heavy parallel load. I've never had 7.2.x fail because of this.\n\n", "msg_date": "Fri, 22 Nov 2002 08:56:14 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> > > The absolutely most important thing to do to speed up inserts and\n> > > updates \n> > > is to squeeze as many as you can into one transaction. \n\nI was discussing this on IRC, and nobody could verify this assertion.\n Do you have an example of bunlding multiple writes into a transaction\ngiving a performance gain?\n\n-Josh\n", "msg_date": "Fri, 22 Nov 2002 20:18:22 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Fri, 2002-11-22 at 22:18, Josh Berkus wrote:\n> Scott,\n> \n> > > > The absolutely most important thing to do to speed up inserts and\n> > > > updates \n> > > > is to squeeze as many as you can into one transaction. 
\n> \n> I was discussing this on IRC, and nobody could verify this assertion.\n> Do you have an example of bunlding multiple writes into a transaction\n> giving a performance gain?\n\nUnfortunately, I missed the beginning of this thread, but I do\nknow that eliminating as many indexes as possible is the answer.\nIf I'm going to insert \"lots\" of rows in an off-line situation,\nthen I'll drop *all* of the indexes, load the data, then re-index.\nIf deleting \"lots\", then I'll drop all but the 1 relevant index,\nthen re-index afterwards.\n\nAs for bundling multiple statements into a transaction to increase\nperformance, I think the questions are:\n- how much disk IO does one BEGIN TRANSACTION do? If it *does*\n do disk IO, then \"bundling\" *will* be more efficient, since\n less disk IO will be performed.\n- are, for example, 500 COMMITs of small amounts of data more or \n less efficient than 1 COMMIT of a large chunk of data? On the\n proprietary database that I use at work, efficiency goes up,\n then levels off at ~100 inserts per transaction.\n\nRon\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "23 Nov 2002 09:06:00 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nRon,\n\n> As for bundling multiple statements into a transaction to increase\n> performance, I think the questions are:\n> - how much disk IO does one BEGIN TRANSACTION do? If it *does*\n> do disk IO, then \"bundling\" *will* be more efficient, since\n> less disk IO will be performed.\n> - are, for example, 500 COMMITs of small amounts of data more or \n> less efficient than 1 COMMIT of a large chunk of data? On the\n> proprietary database that I use at work, efficiency goes up,\n> then levels off at ~100 inserts per transaction.\n\nThat's because some commercial databases (MS SQL, Sybase) use an \"unwinding \ntransaction log\" method of updating. That is, during a transaction, changes \nare written only to the transaction log, and those changes are \"played\" to \nthe database only on a COMMIT. It's an approach that is more efficient for \nlarge transactions, but has the unfortuate side effect of *requiring* read \nand write row locks for the duration of the transaction.\n\nIn Postgres, with MVCC, changes are written to the database immediately with a \nnew transaction ID and the new rows are \"activated\" on COMMIT. So the \nchanges are written to the database as the statements are executed, \nregardless. This is less efficient for large transactions than the \n\"unwinding log\" method, but has the advantage of eliminating read locks \nentirely and most deadlock situations.\n\nUnder MVCC, then, I am not convinced that bundling a bunch of writes into one \ntransaction is faster until I see it demonstrated. 
I certainly see no \nperformance gain on my system.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 11:25:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Under MVCC, then, I am not convinced that bundling a bunch of writes into one\n> transaction is faster until I see it demonstrated. I certainly see no \n> performance gain on my system.\n\nAre you running with fsync off?\n\nThe main reason for bundling updates into larger transactions is that\neach transaction commit requires an fsync on the WAL log. If you have\nfsync enabled, it is physically impossible to commit transactions faster\nthan one per revolution of the WAL disk, no matter how small the\ntransactions. (*) So it pays to make the transactions larger, not smaller.\n\nOn my machine I see a sizable difference (more than 2x) in the rate at\nwhich simple INSERT statements are processed as separate transactions\nand as large batches --- if I have fsync on. With fsync off, nearly no\ndifference.\n\n\t\t\tregards, tom lane\n\n(*) See recent proposals from Curtis Faith in pgsql-hackers about how\nwe might circumvent that limit ... but it's there today.\n", "msg_date": "Sat, 23 Nov 2002 15:20:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> On my machine I see a sizable difference (more than 2x) in the rate at\n> which simple INSERT statements are processed as separate transactions\n> and as large batches --- if I have fsync on. With fsync off, nearly no\n> difference.\n\nI'm using fdatasych, which *does* perform faster than fsych on my system. \nCould this make the difference?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 12:31:50 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> On my machine I see a sizable difference (more than 2x) in the rate at\n>> which simple INSERT statements are processed as separate transactions\n>> and as large batches --- if I have fsync on. With fsync off, nearly no\n>> difference.\n\n> I'm using fdatasych, which *does* perform faster than fsych on my system. \n> Could this make the difference?\n\nNo; you still have to write the data and wait for the disk to spin.\n(FWIW, PG defaults to wal_sync_method = open_datasync on my system,\nand that's what I used in checking the speed just now. So I wasn't\nactually executing any fsync() calls either.)\n\nOn lots of PC hardware, the disks are configured to lie and report write\ncomplete as soon as they've accepted the data into their internal\nbuffers. If you see very little difference between fsync on and fsync\noff, or if you are able to benchmark transaction rates in excess of your\ndisk's RPM, you should suspect that your disk drive is lying to you.\n\nAs an example: in testing INSERT speed on my old HP box just now,\nI got measured rates of about 16000 inserts/minute with fsync off, and\n5700/min with fsync on (for 1 INSERT per transaction). Knowing that my\ndisk drive is 6000 RPM, the latter number is credible. 
On my PC I get\nnumbers way higher than the disk rotation rate :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 15:41:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> As an example: in testing INSERT speed on my old HP box just now,\n> I got measured rates of about 16000 inserts/minute with fsync off, and\n> 5700/min with fsync on (for 1 INSERT per transaction). Knowing that my\n> disk drive is 6000 RPM, the latter number is credible. On my PC I get\n> numbers way higher than the disk rotation rate :-(\n\nThanks for the info. As long as I have your ear, what's your opinion on the \nrisk level of running with fsynch off on a production system? I've seen a \nlot of posts on this list opining the lack of danger, but I'm a bit paranoid.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 12:48:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Thanks for the info. As long as I have your ear, what's your opinion on the \n> risk level of running with fsynch off on a production system?\n\nDepends on how much you trust your hardware, kernel, and power source.\n\nFsync off does not introduce any danger from Postgres crashes --- we\nalways write data out of userspace to the kernel before committing.\nThe question is whether writes can be relied on to get to disk once\nthe kernel has 'em.\n\nThere is a definite risk of data corruption (not just lost transactions,\nbut actively inconsistent database contents) if you suffer a\nsystem-level crash while running with fsync off. The theory of WAL\n(which remember means write *ahead* log) is that it protects you from\ndata corruption as long as WAL records always hit disk before the\nassociated changes in database data files do. Then after a crash you\ncan replay the WAL to make sure you have actually done all the changes\ndescribed by each readable WAL record, and presto you're consistent up\nto the end of the readable WAL. But if data file writes can get to disk\nin advance of their WAL record, you could have a situation where some\nbut not all changes described by a WAL record are in the database after\na system crash and recovery. This could mean incompletely applied\ntransactions, broken indexes, or who knows what.\n\nWhen you get right down to it, what we use fsync for is to force write\nordering --- Unix kernels do not guarantee write ordering any other way.\nWe use it to ensure WAL records hit disk before data file changes do.\n\nBottom line: I wouldn't run with fsync off in a mission-critical\ndatabase. If you're prepared to accept a risk of having to restore from\nyour last backup after a system crash, maybe it's okay.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 16:20:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "\nTom,\n\n> When you get right down to it, what we use fsync for is to force write\n> ordering --- Unix kernels do not guarantee write ordering any other way.\n> We use it to ensure WAL records hit disk before data file changes do.\n> \n> Bottom line: I wouldn't run with fsync off in a mission-critical\n> database. 
If you're prepared to accept a risk of having to restore from\n> your last backup after a system crash, maybe it's okay.\n\nThanks for that overview. Sadly, even with fsynch on, I was forced to restore \nfrom backup because the data needs to be 100% reliable and the crash was due \nto a disk lockup on a checkpoint ... beyond the ability of WAL to deal with, \nI think.\n\nOne last, last question: I was just asked a question on IRC, and I can't find \ndocs defining fsynch, fdatasynch, opensynch, and opendatasynch beyond section \n11.3 which just says that they are all synch methods. Are there docs?\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 23 Nov 2002 13:29:20 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> One last, last question: I was just asked a question on IRC, and I\n> can't find docs defining fsynch, fdatasynch, opensynch, and\n> opendatasynch beyond section 11.3 which just says that they are all\n> synch methods. Are there docs?\n\nSection 11.3 of what?\n\nThe only mention of open_datasync that I see in the docs is in the\nAdmin Guide chapter 3:\nhttp://developer.postgresql.org/docs/postgres/runtime-config.html#RUNTIME-CONFIG-WAL\n\nwhich saith\n\nWAL_SYNC_METHOD (string)\n\n Method used for forcing WAL updates out to disk. Possible values\n are FSYNC (call fsync() at each commit), FDATASYNC (call\n fdatasync() at each commit), OPEN_SYNC (write WAL files with open()\n option O_SYNC), or OPEN_DATASYNC (write WAL files with open()\n option O_DSYNC). Not all of these choices are available on all\n platforms. This option can only be set at server start or in the\n postgresql.conf file.\n\nThis may not help you much to decide which to use :-(, but it does tell\nyou what they are.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 23 Nov 2002 16:41:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "On Fri, 22 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > > > The absolutely most important thing to do to speed up inserts and\n> > > > updates \n> > > > is to squeeze as many as you can into one transaction. \n> \n> I was discussing this on IRC, and nobody could verify this assertion.\n> Do you have an example of bunlding multiple writes into a transaction\n> giving a performance gain?\n\nYes, my own experience.\n\nIt's quite easy to test if you have a database with a large table to play \nwith, use pg_dump to dump a table with the -d switch (makes the dump use \ninsert statements.) Then, make two versions of the dump, one which has a \nbegin;end; pair around all the inserts and one that doesn't, then use psql \n-e to restore both dumps. The difference is HUGE. Around 10 to 20 times \nfaster with the begin end pairs. \n\nI'd think that anyone who's used postgresql for more than a few months \ncould corroborate my experience.\n\n", "msg_date": "Mon, 25 Nov 2002 09:31:20 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> It's quite easy to test if you have a database with a large table to play \n> with, use pg_dump to dump a table with the -d switch (makes the dump use \n> insert statements.) 
Then, make two versions of the dump, one which has a \n> begin;end; pair around all the inserts and one that doesn't, then use psql \n> -e to restore both dumps. The difference is HUGE. Around 10 to 20 times \n> faster with the begin end pairs. \n> \n> I'd think that anyone who's used postgresql for more than a few months \n> could corroborate my experience.\n\nOuch! \n\nNo need to get testy about it. \n\nYour test works as you said; the way I tried testing it before was different. \nGood to know. However, this approach is only useful if you are doing \nrapidfire updates or inserts coming off a single connection. But then it is \n*very* useful.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 25 Nov 2002 14:33:07 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, Josh Berkus wrote:\n\n> Scott,\n> \n> > It's quite easy to test if you have a database with a large table to play \n> > with, use pg_dump to dump a table with the -d switch (makes the dump use \n> > insert statements.) Then, make two versions of the dump, one which has a \n> > begin;end; pair around all the inserts and one that doesn't, then use psql \n> > -e to restore both dumps. The difference is HUGE. Around 10 to 20 times \n> > faster with the begin end pairs. \n> > \n> > I'd think that anyone who's used postgresql for more than a few months \n> > could corroborate my experience.\n> \n> Ouch! \n> \n> No need to get testy about it. \n> \n> Your test works as you said; the way I tried testing it before was different. \n> Good to know. However, this approach is only useful if you are doing \n> rapidfire updates or inserts coming off a single connection. But then it is \n> *very* useful.\n\nI didn't mean that in a testy way, it's just that after you've sat through \na fifteen minute wait while a 1000 records are inserted, you pretty \nquickly switch to the method of inserting them all in one big \ntransaction. That's all.\n\nNote that the opposite is what really gets people in trouble. I've seen \nfolks inserting rather large amounts of data, say into ten or 15 tables, \nand their web servers were crawling under parallel load. Then, they put \nthem into a single transaction and they just flew.\n\nThe funny thing it, they've often avoided transactions because they \nfigured they'd be slower than just inserting the rows, and you kinda have \nto make them sit down first before you show them the performance increase \nfrom putting all those inserts into a single transaction.\n\nNo offense meant, really. It's just that you seemed to really doubt that \nputting things into one transaction helped, and putting things into one \nbig transaction if like the very first postgresql lesson a lot of \nnewcomers learn. :-)\n\n", "msg_date": "Mon, 25 Nov 2002 15:59:16 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": ">The funny thing it, they've often avoided transactions because they\n>figured they'd be slower than just inserting the rows, and you kinda have\n>to make them sit down first before you show them the performance increase\n>from putting all those inserts into a single transaction.\n>\n>No offense meant, really. 
It's just that you seemed to really doubt that\n>putting things into one transaction helped, and putting things into one\n>big transaction if like the very first postgresql lesson a lot of\n>newcomers learn. :-)\n\nScott,\n\nI'm new to postgresql, and as you suggested, this is \ncounter-intuitive to me. I would have thought that having to store \nall the inserts to be able to roll them back would take longer. Is \nmy thinking wrong or not relevant? Why is this not the case?\n\nThanks,\nTim\n", "msg_date": "Mon, 25 Nov 2002 18:41:53 -0500", "msg_from": "Tim Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "> I'm new to postgresql, and as you suggested, this is \n> counter-intuitive to me. I would have thought that having to store \n> all the inserts to be able to roll them back would take longer. Is \n> my thinking wrong or not relevant? Why is this not the case?\n\nTypically that is the case. But Postgresql switches it around a little\nbit. Different trade-offs. No rollback log, but other processes are\nforced to go through you're left over garbage (hence 'vacuum').\n\nIt's still kinda slow with hundreds of connections (as compared to\nOracle) -- but close enough that a license fee -> hardware purchase\nfunds transfer more than makes up for it.\n\nGet yourself a 1GB battery backed ramdisk on it's own scsi chain for WAL\nand it'll fly no matter what size of transaction you use ;)\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "25 Nov 2002 19:20:03 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, Tim Gardner wrote:\n\n> >The funny thing it, they've often avoided transactions because they\n> >figured they'd be slower than just inserting the rows, and you kinda have\n> >to make them sit down first before you show them the performance increase\n> >from putting all those inserts into a single transaction.\n> >\n> >No offense meant, really. It's just that you seemed to really doubt that\n> >putting things into one transaction helped, and putting things into one\n> >big transaction if like the very first postgresql lesson a lot of\n> >newcomers learn. :-)\n> \n> Scott,\n> \n> I'm new to postgresql, and as you suggested, this is \n> counter-intuitive to me. I would have thought that having to store \n> all the inserts to be able to roll them back would take longer. Is \n> my thinking wrong or not relevant? Why is this not the case?\n\nYour thinking on this is wrong, and it is counter-intuitive to think that \na transaction would speed things up. Postgresql is very different from \nother databases.\n\nPostgresql was designed from day one as a transactional database. Which \nis why it was so bothersome that an Oracle marketroid recently was telling \nthe .org folks why they shouldn't use Postgresql because it didn't have \ntransactions. Postgresql may have a few warts here and there, but not \nsupporting transactions has NEVER been a problem for it.\n\nThere are two factors that make Postgresql so weird in regards to \ntransactions. One it that everything happens in a transaction (we won't \nmention truncate for a while, it's the only exception I know of.)\n\nThe next factor that makes for fast inserts of large amounts of data in a \ntransaction is MVCC. 
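
(As an aside, the batching idiom under discussion is simply the following
-- sketch only, table and column names invented for illustration:

    BEGIN;
    INSERT INTO test_data (id, val) VALUES (1, 'one');
    INSERT INTO test_data (id, val) VALUES (2, 'two');
    -- ... thousands more single-row INSERTs ...
    COMMIT;

as opposed to issuing each INSERT on its own, where every statement runs as
its own implicit transaction and pays for its own commit. This is exactly the
kind of file the pg_dump -d trick with a begin;end; pair produces.)
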
With Oracle and many other databases, transactions \nare written into a seperate log file, and when you commit, they are \ninserted into the database as one big group. This means you write your \ndata twice, once into the transaction log, and once into the database.\n\nWith Postgresql's implementation of MVCC, all your data are inserted in \nreal time, with a transaction date that makes the other clients ignore \nthem (mostly, other read committed transactions may or may not see them.)\n\nIf there are indexes to update, they are updated in the same \"invisible \nuntil committed\" way.\n\nAll this means that your inserts don't block anyone else's reads as well.\n\nThis means that when you commit, all postgresql does is make them visible.\n\nIn the event you roll back a transaction, the tuples are all just marked \nas dead and they get ignored.\n\nIt's interesting when you work with folks who came from other databases. \nMy coworker, who's been using Postgresql for about 2 years now, had an \ninteresting experience when he first started here. He was inserting \nsomething like 10,000 rows. He comes over and tells me there must be \nsomething wrong with the database, as his inserts have been running for 10 \nminutes, and he's not even halfway through. So I had him stop the \ninserts, clean out the rows (it was a new table for a new project) and \nwrap all 10,000 inserts into a transaction. What had been running for 10 \nminutes now ran in about 30 seconds.\n\nHe was floored. \n\nWell, good luck on using postgresql, and definitely keep in touch with the \nperformance and general mailing lists. They're a wealth of useful info.\n\n", "msg_date": "Mon, 25 Nov 2002 17:23:57 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On 25 Nov 2002, Rod Taylor wrote:\n\n> > I'm new to postgresql, and as you suggested, this is \n> > counter-intuitive to me. I would have thought that having to store \n> > all the inserts to be able to roll them back would take longer. Is \n> > my thinking wrong or not relevant? Why is this not the case?\n> \n> Typically that is the case. But Postgresql switches it around a little\n> bit. Different trade-offs. No rollback log, but other processes are\n> forced to go through you're left over garbage (hence 'vacuum').\n\nYeah, which means you always need to do a vacuum on a table after a lot of \nupdates/deletes. And analyze after a lot of inserts/updates/deletes.\n\n> It's still kinda slow with hundreds of connections (as compared to\n> Oracle) -- but close enough that a license fee -> hardware purchase\n> funds transfer more than makes up for it.\n\nAin't it amazing how much hardware a typical Oracle license can buy? ;^)\n\nHeck, even the license cost MS-SQL server is enough to buy a nice quad \nXeon with all the trimmings nowadays. 
Then you can probably still have \nenough left over for one of the pgsql developers to fly out and train your \nfolks on it.\n\n", "msg_date": "Mon, 25 Nov 2002 17:30:00 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": ">With Postgresql's implementation of MVCC, all your data are inserted in\n>real time, with a transaction date that makes the other clients ignore\n>them (mostly, other read committed transactions may or may not see them.)\n>\n>If there are indexes to update, they are updated in the same \"invisible\n>until committed\" way.\n>\n>All this means that your inserts don't block anyone else's reads as well.\n>\n>This means that when you commit, all postgresql does is make them visible.\n\nscott,\n\nExactly the kind of explanation/understanding I was hoping for!\n\nThank you!\n\nTim\n", "msg_date": "Mon, 25 Nov 2002 19:40:43 -0500", "msg_from": "Tim Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 25 Nov 2002, scott.marlowe wrote:\n\n> On Mon, 25 Nov 2002, Tim Gardner wrote:\n> \n> > I'm new to postgresql, and as you suggested, this is \n> > counter-intuitive to me. I would have thought that having to store \n> > all the inserts to be able to roll them back would take longer. Is \n> > my thinking wrong or not relevant? Why is this not the case?\n> \n> Your thinking on this is wrong, and it is counter-intuitive to think that \n> a transaction would speed things up. Postgresql is very different from \n> other databases.\n\nSorry that came out like that, I meant to write:\n\nI meant to add in there that I thought the same way at first, and only \nafter a little trial and much error did I realize that I was thinking in \nterms of how other databases did things. I.e. most people make the same \nmistake when starting out with pgsql.\n\n", "msg_date": "Mon, 25 Nov 2002 17:41:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> On Mon, 25 Nov 2002, Tim Gardner wrote:\n> \n[snip]\n> \n> There are two factors that make Postgresql so weird in regards to \n> transactions. One it that everything happens in a transaction (we won't \n> mention truncate for a while, it's the only exception I know of.)\n\nWhy is this so weird? Do I use the /other/ weird RDBMS? (Rdb/VMS)\n\n> The next factor that makes for fast inserts of large amounts of data in a \n> transaction is MVCC. With Oracle and many other databases, transactions \n> are written into a seperate log file, and when you commit, they are \n> inserted into the database as one big group. This means you write your \n> data twice, once into the transaction log, and once into the database.\n\nYou are just deferring the pain. Whereas others must flush from log\nto \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n\n> With Postgresql's implementation of MVCC, all your data are inserted in \n> real time, with a transaction date that makes the other clients ignore \n> them (mostly, other read committed transactions may or may not see them.)\n\nIs this unusual? 
(Except that Rdb/VMS uses a 64-bit integer (a\nTransaction Sequence Number) instead of a timestamp, because Rdb,\ncominging from VAX/VMS is natively cluster-aware, and it's not\nguaranteed that all nodes have the exact same timestamp.\n\n[snip]\n> In the event you roll back a transaction, the tuples are all just marked \n> as dead and they get ignored.\n\nWhat if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\nreally slow down the nightly operations.\n\n> It's interesting when you work with folks who came from other databases. \n> My coworker, who's been using Postgresql for about 2 years now, had an \n> interesting experience when he first started here. He was inserting \n> something like 10,000 rows. He comes over and tells me there must be \n> something wrong with the database, as his inserts have been running for 10 \n> minutes, and he's not even halfway through. So I had him stop the \n> inserts, clean out the rows (it was a new table for a new project) and \n> wrap all 10,000 inserts into a transaction. What had been running for 10 \n> minutes now ran in about 30 seconds.\n\nAgain, why is this so unusual?????\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "25 Nov 2002 19:41:03 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Scott,\n\n> No offense meant, really. It's just that you seemed to really doubt that \n> putting things into one transaction helped, and putting things into one \n> big transaction if like the very first postgresql lesson a lot of \n> newcomers learn. :-)\n\nNot so odd, if you think about it. After all, this approach is only useful \nfor a series of small update/insert statements on a single connection. \nThinking about it, I frankly never do this except as part of a stored \nprocedure ... which, in Postgres, is automatically a transaction.\n\nI'm lucky enough that my data loads have all been adaptable to COPY \nstatements, which bypasses this issue completely.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 25 Nov 2002 17:41:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n> On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n>> The next factor that makes for fast inserts of large amounts of data in a \n>> transaction is MVCC. With Oracle and many other databases, transactions \n>> are written into a seperate log file, and when you commit, they are \n>> inserted into the database as one big group. This means you write your \n>> data twice, once into the transaction log, and once into the database.\n\n> You are just deferring the pain. Whereas others must flush from log\n> to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n\nSure, it's just shuffling the housekeeping work from one place to\nanother. 
The thing that I like about Postgres' approach is that we\nput the housekeeping in a background task (VACUUM) rather than in the\ncritical path of foreground transaction commit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Nov 2002 22:30:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "Tim Gardner <[email protected]> writes:\n>> All this means that your inserts don't block anyone else's reads as well.\n>> This means that when you commit, all postgresql does is make them visible.\n\n> Exactly the kind of explanation/understanding I was hoping for!\n\nThere's another point worth making. What Scott was pointing out is that\nwhether you commit or roll back a transaction costs about the same, in\nPostgres, as far as tuple update processing is concerned. At the end of\na transaction, we have both new (inserted/updated) and old\n(deleted/replaced) tuples laying about in the database. Commit marks\nthe transaction committed in pg_clog; abort marks it aborted instead;\nneither one lifts a finger to touch the tuples. (Subsequent visitors\nto the tuples will mark them \"good\" or \"dead\" based on consulting\npg_clog, but we don't try to do that during transaction commit.)\n\nBut having said all that, transaction commit is more expensive than\ntransaction abort, because we have to flush the transaction commit\nWAL record to disk before we can report \"transaction successfully\ncommitted\". That means waiting for the disk to spin. Transaction abort\ndoesn't have to wait --- that's because if there's a crash and the abort\nrecord never makes it to disk, the default assumption on restart will be\nthat the transaction aborted, anyway.\n\nSo the basic reason that it's worth batching multiple updates into one\ntransaction is that you only wait for the commit record flush once,\nnot once per update. This makes no difference worth mentioning if your\nupdates are big, but on modern hardware you can update quite a few\nindividual rows in the time it takes the disk to spin once.\n\n(BTW, if you set fsync = off, then the performance difference goes away,\nbecause we don't wait for the commit record to flush to disk ... but\nthen you become vulnerable to problems after a system crash.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Nov 2002 22:44:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update " }, { "msg_contents": "On Mon, 2002-11-25 at 19:30, scott.marlowe wrote:\n> On 25 Nov 2002, Rod Taylor wrote:\n> \n> > > I'm new to postgresql, and as you suggested, this is \n> > > counter-intuitive to me. I would have thought that having to store \n> > > all the inserts to be able to roll them back would take longer. Is \n> > > my thinking wrong or not relevant? Why is this not the case?\n> > \n> > Typically that is the case. But Postgresql switches it around a little\n> > bit. Different trade-offs. No rollback log, but other processes are\n> > forced to go through you're left over garbage (hence 'vacuum').\n> \n> Yeah, which means you always need to do a vacuum on a table after a lot of \n> updates/deletes. 
And analyze after a lot of inserts/updates/deletes.\n\nA good auto-vacuum daemon will help that out :) Not really any\ndifferent than an OO dbs garbage collection process -- except PGs vacuum\nis several orders of magnitude faster.\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "25 Nov 2002 22:52:51 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 21:30, Tom Lane wrote:\n> Ron Johnson <[email protected]> writes:\n> > On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> >> The next factor that makes for fast inserts of large amounts of data in a \n> >> transaction is MVCC. With Oracle and many other databases, transactions \n> >> are written into a seperate log file, and when you commit, they are \n> >> inserted into the database as one big group. This means you write your \n> >> data twice, once into the transaction log, and once into the database.\n> \n> > You are just deferring the pain. Whereas others must flush from log\n> > to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n> \n> Sure, it's just shuffling the housekeeping work from one place to\n> another. The thing that I like about Postgres' approach is that we\n> put the housekeeping in a background task (VACUUM) rather than in the\n> critical path of foreground transaction commit.\n\nIf you have a quiescent point somewhere in the middle of the night...\n\nIt's all about differing philosophies, though, and there's no way\nthat Oracle will re-write Rdb/VMS (they bought it from DEC in 1997\nfor it's high-volume OLTP technolgies) and you all won't re-write\nPostgres...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "25 Nov 2002 22:27:41 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "tom lane wrote:\n> Sure, it's just shuffling the housekeeping work from one place to\n> another. The thing that I like about Postgres' approach is that we\n> put the housekeeping in a background task (VACUUM) rather than in the\n> critical path of foreground transaction commit.\n\nThinking with my marketing hat on, MVCC would be a much bigger win if VACUUM\nwas not required (or was done automagically). The need for periodic VACUUM\njust gives ammunition to the PostgreSQL opponents who can claim we are\ndeferring work but that it amounts to the same thing.\n\nA fully automatic background VACUUM will significantly reduce but will not\neliminate this perceived weakness.\n\nHowever, it always seemed to me there should be some way to reuse the space\nmore dynamically and quickly than a background VACUUM thereby reducing the\npercentage of tuples that are expired in heavy update cases. If only a very\ntiny number of tuples on the disk are expired this will reduce the aggregate\nperformance/space penalty of MVCC into insignificance for the majority of\nuses.\n\nCouldn't we reuse tuple and index space as soon as there are no transactions\nthat depend on the old tuple or index values. 
I have imagined that this was\nalways part of the long-term master plan.\n\nCouldn't we keep a list of dead tuples in shared memory and look in the list\nfirst when deciding where to place new values for inserts or updates so we\ndon't have to rely on VACUUM (even a background one)? If there are expired\ntuple slots in the list these would be used before allocating a new slot from\nthe tuple heap.\n\nThe only issue is determining the lowest transaction ID for in-process\ntransactions which seems relatively easy to do (if it's not already done\nsomewhere).\n\nIn the normal shutdown and startup case, a tuple VACUUM could be performed\nautomatically. This would normally be very fast since there would not be many\ntuples in the list.\n\nIndex slots would be handled differently since these cannot be substituted\none for another. However, these could be recovered as part of every index\npage update. Pages would be scanned before being written and any expired\nslots that had transaction ID's lower than the lowest active slot would be\nremoved. This could be done for non-leaf pages as well and would result in\nonly reorganizing a page that is already going to be written thereby not\nadding much to the overall work.\n\nI don't think that internal pages that contain pointers to values in nodes\nfurther down the tree that are no longer in the leaf nodes because of this\npartial expired entry elimination will cause a problem since searches and\nscans will still work fine.\n\nDoes VACUUM do something that could not be handled in this realtime manner?\n\n- Curtis\n\n\n", "msg_date": "Tue, 26 Nov 2002 11:32:28 -0400", "msg_from": "\"Curtis Faith\" <[email protected]>", "msg_from_op": false, "msg_subject": "[HACKERS] Realtime VACUUM, was: performance of insert/delete/update " }, { "msg_contents": "On Mon, Nov 25, 2002 at 05:30:00PM -0700, scott.marlowe wrote:\n> \n> Ain't it amazing how much hardware a typical Oracle license can buy? ;^)\n\nNot to mention the hardware budget for a typical Oracle installation.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 26 Nov 2002 11:49:18 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, Nov 25, 2002 at 07:41:03PM -0600, Ron Johnson wrote:\n> \n> What if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\n> really slow down the nightly operations.\n\nWhy? After upgrading to 7.2, we find it a good idea to do frequent\nvacuum analyse on frequently-changed tables. It doesn't block, and\nif you vacuum frequently enough, it goes real fast.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 26 Nov 2002 11:54:17 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, 26 Nov 2002, Andrew Sullivan wrote:\n\n> On Mon, Nov 25, 2002 at 07:41:03PM -0600, Ron Johnson wrote:\n> > \n> > What if you are in a 24x365 environment? Doing a VACUUM ANALYZE would\n> > really slow down the nightly operations.\n> \n> Why? After upgrading to 7.2, we find it a good idea to do frequent\n> vacuum analyse on frequently-changed tables. 
It doesn't block, and\n> if you vacuum frequently enough, it goes real fast.\n\nFor example, I just ran pgbench -c 20 -t 200 (20 concurrent's) with a \nscript in the background that looked like this:\n\n#!/bin/bash\nfor ((a=0;a=1;a=0)) do {\n vacuumdb -z postgres\n}\ndone\n\n(i.e. run vacuumdb in analyze against the database continuously.)\n\nOutput of top:\n\n71 processes: 63 sleeping, 8 running, 0 zombie, 0 stopped\nCPU0 states: 66.2% user, 25.1% system, 0.0% nice, 8.1% idle\nCPU1 states: 79.4% user, 18.3% system, 0.0% nice, 1.2% idle\nMem: 254660K av, 249304K used, 5356K free, 26736K shrd, 21720K \nbuff\nSwap: 3084272K av, 1300K used, 3082972K free 142396K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n21381 postgres 11 0 1304 1304 868 S 10.8 0.5 0:00 pgbench\n21393 postgres 14 0 4832 4832 4116 R 8.4 1.8 0:00 postmaster\n21390 postgres 9 0 4880 4880 4164 S 7.8 1.9 0:00 postmaster\n21385 postgres 14 0 4884 4884 4168 R 6.7 1.9 0:00 postmaster\n21399 postgres 9 0 4768 4768 4076 S 6.3 1.8 0:00 postmaster\n21402 postgres 9 0 4776 4776 4076 S 6.1 1.8 0:00 postmaster\n21383 postgres 14 0 4828 4828 4112 R 5.9 1.8 0:00 postmaster\n21386 postgres 14 0 4872 4872 4156 R 5.9 1.9 0:00 postmaster\n21392 postgres 9 0 4820 4820 4104 S 5.9 1.8 0:00 postmaster\n21409 postgres 11 0 4600 4600 3544 R 5.8 1.8 0:00 postmaster\n21387 postgres 9 0 4824 4824 4108 S 5.4 1.8 0:00 postmaster\n21394 postgres 9 0 4808 4808 4092 S 5.4 1.8 0:00 postmaster\n21391 postgres 9 0 4816 4816 4100 S 5.0 1.8 0:00 postmaster\n21398 postgres 9 0 4796 4796 4088 S 5.0 1.8 0:00 postmaster\n21384 postgres 9 0 4756 4756 4040 R 4.8 1.8 0:00 postmaster\n21389 postgres 9 0 4788 4788 4072 S 4.8 1.8 0:00 postmaster\n21397 postgres 9 0 4772 4772 4056 S 4.6 1.8 0:00 postmaster\n21388 postgres 9 0 4780 4780 4064 S 4.4 1.8 0:00 postmaster\n21396 postgres 9 0 4756 4756 4040 S 4.3 1.8 0:00 postmaster\n21395 postgres 14 0 4760 4760 4044 S 4.1 1.8 0:00 postmaster\n21401 postgres 14 0 4736 4736 4036 R 4.1 1.8 0:00 postmaster\n21400 postgres 9 0 4732 4732 4028 S 2.9 1.8 0:00 postmaster\n21403 postgres 9 0 1000 1000 820 S 2.4 0.3 0:00 vacuumdb\n21036 postgres 9 0 1056 1056 828 R 2.0 0.4 0:27 top\n18615 postgres 9 0 1912 1912 1820 S 1.1 0.7 0:01 postmaster\n21408 postgres 9 0 988 988 804 S 0.7 0.3 0:00 psql\n\nSo, pgbench is the big eater of CPU at 10%, each postmaster using about \n5%, and vacuumdb using 2.4%. Note that after a second, the vacuumdb use \ndrops off to 0% until it finishes and runs again. The output of the \npgbench without vacuumdb running, but with top, to be fair was:\n\nnumber of clients: 20\nnumber of transactions per client: 200\nnumber of transactions actually processed: 4000/4000\ntps = 54.428632 (including connections establishing)\ntps = 54.847276 (excluding connections establishing)\n\nWhile the output with the vacuumdb running continuously was:\n\nnumber of clients: 20\nnumber of transactions per client: 200\nnumber of transactions actually processed: 4000/4000\ntps = 52.114343 (including connections establishing)\ntps = 52.873435 (excluding connections establishing)\n\nSo, the difference in performance was around 4% slower.\n\nI'd hardly consider that a big hit against the database.\n\nNote that in every test I've made up and run, the difference is at most 5% \nwith vacuumdb -z running continuously in the background. 
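
(For reference, vacuumdb is just a command-line wrapper that issues the
corresponding SQL, so the same thing can be done straight from psql or a
scheduled script -- for example, against the pgbench tables (names here are
only illustrative):

    VACUUM ANALYZE accounts;
    VACUUM ANALYZE history;
    -- or, for every table in the current database:
    VACUUM ANALYZE;

)
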
Big text fields, \nlots of math, lots of fks, etc...\n\nYes, vacuum WAS a problem long ago, but since 7.2 came out it's only a \n\"problem\" in terms of remember to run it.\n\n", "msg_date": "Tue, 26 Nov 2002 11:06:47 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, Nov 26, 2002 at 11:06:47AM -0700, scott.marlowe wrote:\n> So, the difference in performance was around 4% slower.\n> \n> I'd hardly consider that a big hit against the database.\n> \n> Note that in every test I've made up and run, the difference is at most 5% \n> with vacuumdb -z running continuously in the background. Big text fields, \n> lots of math, lots of fks, etc...\n\nAlso, it's important to remember that you may see a considerable\nimprovement in efficiency of some queries if you vacuum often, (it's\npartly dependent on the turnover in your database -- if it never\nchanges, you don't need to vacuum often). So a 5% hit in regular\nperformance may be worth it over the long haul, if certain queries\nare way cheaper to run. (That is, while you may get 4% slower\nperformance overall, if the really slow queries are much faster, the\nfast queries running slower may well be worth it. In my case,\ncertainly, I think it is.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 26 Nov 2002 13:24:39 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Tue, 26 Nov 2002, Andrew Sullivan wrote:\n\n> On Tue, Nov 26, 2002 at 11:06:47AM -0700, scott.marlowe wrote:\n> > So, the difference in performance was around 4% slower.\n> > \n> > I'd hardly consider that a big hit against the database.\n> > \n> > Note that in every test I've made up and run, the difference is at most 5% \n> > with vacuumdb -z running continuously in the background. Big text fields, \n> > lots of math, lots of fks, etc...\n> \n> Also, it's important to remember that you may see a considerable\n> improvement in efficiency of some queries if you vacuum often, (it's\n> partly dependent on the turnover in your database -- if it never\n> changes, you don't need to vacuum often). So a 5% hit in regular\n> performance may be worth it over the long haul, if certain queries\n> are way cheaper to run. (That is, while you may get 4% slower\n> performance overall, if the really slow queries are much faster, the\n> fast queries running slower may well be worth it. In my case,\n> certainly, I think it is.)\n\nAgreed. We used to run vacuumdb at night only when we were running 7.1, \nand we had a script top detect if it had hung or anything. I.e. vacuuming \nwas still a semi-dangerous activity. I now have it set to run every hour \n(-z -a switches to vacuumdb). I'd run it more often but we just don't \nhave enough load to warrant it.\n\n", "msg_date": "Tue, 26 Nov 2002 11:46:27 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\nGood ideas. 
I think the master solution is to hook the statistics\ndaemon information into an automatic vacuum that could _know_ which\ntables need attention.\n\n---------------------------------------------------------------------------\n\nCurtis Faith wrote:\n> tom lane wrote:\n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n> \n> Thinking with my marketing hat on, MVCC would be a much bigger win if VACUUM\n> was not required (or was done automagically). The need for periodic VACUUM\n> just gives ammunition to the PostgreSQL opponents who can claim we are\n> deferring work but that it amounts to the same thing.\n> \n> A fully automatic background VACUUM will significantly reduce but will not\n> eliminate this perceived weakness.\n> \n> However, it always seemed to me there should be some way to reuse the space\n> more dynamically and quickly than a background VACUUM thereby reducing the\n> percentage of tuples that are expired in heavy update cases. If only a very\n> tiny number of tuples on the disk are expired this will reduce the aggregate\n> performance/space penalty of MVCC into insignificance for the majority of\n> uses.\n> \n> Couldn't we reuse tuple and index space as soon as there are no transactions\n> that depend on the old tuple or index values. I have imagined that this was\n> always part of the long-term master plan.\n> \n> Couldn't we keep a list of dead tuples in shared memory and look in the list\n> first when deciding where to place new values for inserts or updates so we\n> don't have to rely on VACUUM (even a background one)? If there are expired\n> tuple slots in the list these would be used before allocating a new slot from\n> the tuple heap.\n> \n> The only issue is determining the lowest transaction ID for in-process\n> transactions which seems relatively easy to do (if it's not already done\n> somewhere).\n> \n> In the normal shutdown and startup case, a tuple VACUUM could be performed\n> automatically. This would normally be very fast since there would not be many\n> tuples in the list.\n> \n> Index slots would be handled differently since these cannot be substituted\n> one for another. However, these could be recovered as part of every index\n> page update. Pages would be scanned before being written and any expired\n> slots that had transaction ID's lower than the lowest active slot would be\n> removed. This could be done for non-leaf pages as well and would result in\n> only reorganizing a page that is already going to be written thereby not\n> adding much to the overall work.\n> \n> I don't think that internal pages that contain pointers to values in nodes\n> further down the tree that are no longer in the leaf nodes because of this\n> partial expired entry elimination will cause a problem since searches and\n> scans will still work fine.\n> \n> Does VACUUM do something that could not be handled in this realtime manner?\n> \n> - Curtis\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Nov 2002 14:09:38 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 23:27, Ron Johnson wrote:\n> On Mon, 2002-11-25 at 21:30, Tom Lane wrote:\n> > Ron Johnson <[email protected]> writes:\n> > > On Mon, 2002-11-25 at 18:23, scott.marlowe wrote:\n> > >> The next factor that makes for fast inserts of large amounts of data in a \n> > >> transaction is MVCC. With Oracle and many other databases, transactions \n> > >> are written into a seperate log file, and when you commit, they are \n> > >> inserted into the database as one big group. This means you write your \n> > >> data twice, once into the transaction log, and once into the database.\n> > \n> > > You are just deferring the pain. Whereas others must flush from log\n> > > to \"database files\", they do not have to VACUUM or VACUUM ANALYZE.\n> > \n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n> \n> If you have a quiescent point somewhere in the middle of the night...\n> \n\nYou seem to be implying that running vacuum analyze causes some large\nperformance issues, but it's just not the case. I run a 24x7 operation,\nand I have a few tables that \"turn over\" within 15 minutes. On these\ntables I run vacuum analyze every 5 - 10 minutes and really there is\nlittle/no performance penalty. \n\nRobert Treat\n\n\n", "msg_date": "26 Nov 2002 14:25:46 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "\"Curtis Faith\" <[email protected]> writes:\n> tom lane wrote:\n>> Sure, it's just shuffling the housekeeping work from one place to\n>> another. The thing that I like about Postgres' approach is that we\n>> put the housekeeping in a background task (VACUUM) rather than in the\n>> critical path of foreground transaction commit.\n\n> Couldn't we reuse tuple and index space as soon as there are no transactions\n> that depend on the old tuple or index values. I have imagined that this was\n> always part of the long-term master plan.\n> Couldn't we keep a list of dead tuples in shared memory and look in the list\n> first when deciding where to place new values for inserts or updates so we\n> don't have to rely on VACUUM (even a background one)?\n\nISTM that either of these ideas would lead to pushing VACUUM overhead\ninto the foreground transactions, which is exactly what we don't want to\ndo. Keep in mind also that shared memory is finite ... *very* finite.\nIt's bad enough trying to keep per-page status in there (cf FSM) ---\nper-tuple status is right out.\n\nI agree that automatic background VACUUMing would go a long way towards\nreducing operational problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Nov 2002 22:13:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "I always wandered if VACUUM is the right name for the porcess. 
Now, when\nPostgreSQL\nis actively challenging in Enterprise space, it might be a good idea to give\nit a more\nenterprise-like name. Try to think how it is looking for an outside person\nto see\nus, database professionals hold lenghty discussions about the ways we\nvacuum a database. Why should you need to vacuum a database? Is it\ndirty? In my personal opinion, something like \"space reclaiming daemon\",\n\"free-list organizer\", \"tuple recyle job\" or \"segment coalesce process\"\nwould\nsound more business-like .\n\nRegards,\nNick\n\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <[email protected]>\nTo: \"Curtis Faith\" <[email protected]>\nCc: \"Tom Lane\" <[email protected]>; \"Ron Johnson\" <[email protected]>;\n\"PgSQL Performance ML\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, November 26, 2002 9:09 PM\nSubject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of\ninsert/delete/update\n\n\n>\n> Good ideas. I think the master solution is to hook the statistics\n> daemon information into an automatic vacuum that could _know_ which\n> tables need attention.\n>\n> --------------------------------------------------------------------------\n-\n>\n> Curtis Faith wrote:\n> > tom lane wrote:\n> > > Sure, it's just shuffling the housekeeping work from one place to\n> > > another. The thing that I like about Postgres' approach is that we\n> > > put the housekeeping in a background task (VACUUM) rather than in the\n> > > critical path of foreground transaction commit.\n> >\n> > Thinking with my marketing hat on, MVCC would be a much bigger win if\nVACUUM\n> > was not required (or was done automagically). The need for periodic\nVACUUM\n> > just gives ammunition to the PostgreSQL opponents who can claim we are\n> > deferring work but that it amounts to the same thing.\n> >\n> > A fully automatic background VACUUM will significantly reduce but will\nnot\n> > eliminate this perceived weakness.\n> >\n> > However, it always seemed to me there should be some way to reuse the\nspace\n> > more dynamically and quickly than a background VACUUM thereby reducing\nthe\n> > percentage of tuples that are expired in heavy update cases. If only a\nvery\n> > tiny number of tuples on the disk are expired this will reduce the\naggregate\n> > performance/space penalty of MVCC into insignificance for the majority\nof\n> > uses.\n> >\n> > Couldn't we reuse tuple and index space as soon as there are no\ntransactions\n> > that depend on the old tuple or index values. I have imagined that this\nwas\n> > always part of the long-term master plan.\n> >\n> > Couldn't we keep a list of dead tuples in shared memory and look in the\nlist\n> > first when deciding where to place new values for inserts or updates so\nwe\n> > don't have to rely on VACUUM (even a background one)? If there are\nexpired\n> > tuple slots in the list these would be used before allocating a new slot\nfrom\n> > the tuple heap.\n> >\n> > The only issue is determining the lowest transaction ID for in-process\n> > transactions which seems relatively easy to do (if it's not already done\n> > somewhere).\n> >\n> > In the normal shutdown and startup case, a tuple VACUUM could be\nperformed\n> > automatically. This would normally be very fast since there would not be\nmany\n> > tuples in the list.\n> >\n> > Index slots would be handled differently since these cannot be\nsubstituted\n> > one for another. However, these could be recovered as part of every\nindex\n> > page update. 
Pages would be scanned before being written and any expired\n> > slots that had transaction ID's lower than the lowest active slot would\nbe\n> > removed. This could be done for non-leaf pages as well and would result\nin\n> > only reorganizing a page that is already going to be written thereby not\n> > adding much to the overall work.\n> >\n> > I don't think that internal pages that contain pointers to values in\nnodes\n> > further down the tree that are no longer in the leaf nodes because of\nthis\n> > partial expired entry elimination will cause a problem since searches\nand\n> > scans will still work fine.\n> >\n> > Does VACUUM do something that could not be handled in this realtime\nmanner?\n> >\n> > - Curtis\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania\n19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Wed, 27 Nov 2002 16:02:22 +0200", "msg_from": "\"Nicolai Tufar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "Just for the humor of it, as well as to confirm Nick's perspective, \nyears ago on our inhouse developed Burroughs mainframe dbms, we had a \nprocess called \"garbage collect\".\n\nNicolai Tufar wrote:\n\n>I always wandered if VACUUM is the right name for the porcess. Now, when\n>PostgreSQL\n>is actively challenging in Enterprise space, it might be a good idea to give\n>it a more\n>enterprise-like name. Try to think how it is looking for an outside person\n>to see\n>us, database professionals hold lenghty discussions about the ways we\n>vacuum a database. Why should you need to vacuum a database? Is it\n>dirty? In my personal opinion, something like \"space reclaiming daemon\",\n>\"free-list organizer\", \"tuple recyle job\" or \"segment coalesce process\"\n>would\n>sound more business-like .\n>\n>Regards,\n>Nick\n>\n>\n>----- Original Message -----\n>From: \"Bruce Momjian\" <[email protected]>\n>To: \"Curtis Faith\" <[email protected]>\n>Cc: \"Tom Lane\" <[email protected]>; \"Ron Johnson\" <[email protected]>;\n>\"PgSQL Performance ML\" <[email protected]>;\n><[email protected]>\n>Sent: Tuesday, November 26, 2002 9:09 PM\n>Subject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of\n>insert/delete/update\n>\n>\n> \n>\n>>Good ideas. I think the master solution is to hook the statistics\n>>daemon information into an automatic vacuum that could _know_ which\n>>tables need attention.\n>>\n>>--------------------------------------------------------------------------\n>> \n>>\n>-\n> \n>\n>>Curtis Faith wrote:\n>> \n>>\n>>>tom lane wrote:\n>>> \n>>>\n>>>>Sure, it's just shuffling the housekeeping work from one place to\n>>>>another. 
The thing that I like about Postgres' approach is that we\n>>>>put the housekeeping in a background task (VACUUM) rather than in the\n>>>>critical path of foreground transaction commit.\n>>>> \n>>>>\n>>>Thinking with my marketing hat on, MVCC would be a much bigger win if\n>>> \n>>>\n>VACUUM\n> \n>\n>>>was not required (or was done automagically). The need for periodic\n>>> \n>>>\n>VACUUM\n> \n>\n>>>just gives ammunition to the PostgreSQL opponents who can claim we are\n>>>deferring work but that it amounts to the same thing.\n>>>\n>>>A fully automatic background VACUUM will significantly reduce but will\n>>> \n>>>\n>not\n> \n>\n>>>eliminate this perceived weakness.\n>>>\n>>>However, it always seemed to me there should be some way to reuse the\n>>> \n>>>\n>space\n> \n>\n>>>more dynamically and quickly than a background VACUUM thereby reducing\n>>> \n>>>\n>the\n> \n>\n>>>percentage of tuples that are expired in heavy update cases. If only a\n>>> \n>>>\n>very\n> \n>\n>>>tiny number of tuples on the disk are expired this will reduce the\n>>> \n>>>\n>aggregate\n> \n>\n>>>performance/space penalty of MVCC into insignificance for the majority\n>>> \n>>>\n>of\n> \n>\n>>>uses.\n>>>\n>>>Couldn't we reuse tuple and index space as soon as there are no\n>>> \n>>>\n>transactions\n> \n>\n>>>that depend on the old tuple or index values. I have imagined that this\n>>> \n>>>\n>was\n> \n>\n>>>always part of the long-term master plan.\n>>>\n>>>Couldn't we keep a list of dead tuples in shared memory and look in the\n>>> \n>>>\n>list\n> \n>\n>>>first when deciding where to place new values for inserts or updates so\n>>> \n>>>\n>we\n> \n>\n>>>don't have to rely on VACUUM (even a background one)? If there are\n>>> \n>>>\n>expired\n> \n>\n>>>tuple slots in the list these would be used before allocating a new slot\n>>> \n>>>\n>from\n> \n>\n>>>the tuple heap.\n>>>\n>>>The only issue is determining the lowest transaction ID for in-process\n>>>transactions which seems relatively easy to do (if it's not already done\n>>>somewhere).\n>>>\n>>>In the normal shutdown and startup case, a tuple VACUUM could be\n>>> \n>>>\n>performed\n> \n>\n>>>automatically. This would normally be very fast since there would not be\n>>> \n>>>\n>many\n> \n>\n>>>tuples in the list.\n>>>\n>>>Index slots would be handled differently since these cannot be\n>>> \n>>>\n>substituted\n> \n>\n>>>one for another. However, these could be recovered as part of every\n>>> \n>>>\n>index\n> \n>\n>>>page update. Pages would be scanned before being written and any expired\n>>>slots that had transaction ID's lower than the lowest active slot would\n>>> \n>>>\n>be\n> \n>\n>>>removed. 
This could be done for non-leaf pages as well and would result\n>>> \n>>>\n>in\n> \n>\n>>>only reorganizing a page that is already going to be written thereby not\n>>>adding much to the overall work.\n>>>\n>>>I don't think that internal pages that contain pointers to values in\n>>> \n>>>\n>nodes\n> \n>\n>>>further down the tree that are no longer in the leaf nodes because of\n>>> \n>>>\n>this\n> \n>\n>>>partial expired entry elimination will cause a problem since searches\n>>> \n>>>\n>and\n> \n>\n>>>scans will still work fine.\n>>>\n>>>Does VACUUM do something that could not be handled in this realtime\n>>> \n>>>\n>manner?\n> \n>\n>>>- Curtis\n>>>\n>>>\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Don't 'kill -9' the postmaster\n>>>\n>>> \n>>>\n>>--\n>> Bruce Momjian | http://candle.pha.pa.us\n>> [email protected] | (610) 359-1001\n>> + If your life is a hard drive, | 13 Roberts Road\n>> + Christ can be your backup. | Newtown Square, Pennsylvania\n>> \n>>\n>19073\n> \n>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n>> \n>>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> \n>\n\n\n\n\n", "msg_date": "Wed, 27 Nov 2002 09:43:01 -0500", "msg_from": "Jim Beckstrom <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "Or just reorg.\n\nAm Mittwoch, 27. November 2002 15:02 schrieb Nicolai Tufar:\n> I always wandered if VACUUM is the right name for the porcess. Now, when\n> PostgreSQL\n> is actively challenging in Enterprise space, it might be a good idea to\n> give it a more\n> enterprise-like name. Try to think how it is looking for an outside person\n> to see\n> us, database professionals hold lenghty discussions about the ways we\n> vacuum a database. Why should you need to vacuum a database? Is it\n> dirty? In my personal opinion, something like \"space reclaiming daemon\",\n> \"free-list organizer\", \"tuple recyle job\" or \"segment coalesce process\"\n> would\n> sound more business-like .\n>\n> Regards,\n> Nick\n>\n>\n> ----- Original Message -----\n> From: \"Bruce Momjian\" <[email protected]>\n> To: \"Curtis Faith\" <[email protected]>\n> Cc: \"Tom Lane\" <[email protected]>; \"Ron Johnson\" <[email protected]>;\n> \"PgSQL Performance ML\" <[email protected]>;\n> <[email protected]>\n> Sent: Tuesday, November 26, 2002 9:09 PM\n> Subject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: performance of\n> insert/delete/update\n>\n> > Good ideas. I think the master solution is to hook the statistics\n> > daemon information into an automatic vacuum that could _know_ which\n> > tables need attention.\n> >\n> > -------------------------------------------------------------------------\n> >-\n>\n> -\n>\n> > Curtis Faith wrote:\n> > > tom lane wrote:\n> > > > Sure, it's just shuffling the housekeeping work from one place to\n> > > > another. 
The thing that I like about Postgres' approach is that we\n> > > > put the housekeeping in a background task (VACUUM) rather than in the\n> > > > critical path of foreground transaction commit.\n> > >\n> > > Thinking with my marketing hat on, MVCC would be a much bigger win if\n>\n> VACUUM\n>\n> > > was not required (or was done automagically). The need for periodic\n>\n> VACUUM\n>\n> > > just gives ammunition to the PostgreSQL opponents who can claim we are\n> > > deferring work but that it amounts to the same thing.\n> > >\n> > > A fully automatic background VACUUM will significantly reduce but will\n>\n> not\n>\n> > > eliminate this perceived weakness.\n> > >\n> > > However, it always seemed to me there should be some way to reuse the\n>\n> space\n>\n> > > more dynamically and quickly than a background VACUUM thereby reducing\n>\n> the\n>\n> > > percentage of tuples that are expired in heavy update cases. If only a\n>\n> very\n>\n> > > tiny number of tuples on the disk are expired this will reduce the\n>\n> aggregate\n>\n> > > performance/space penalty of MVCC into insignificance for the majority\n>\n> of\n>\n> > > uses.\n> > >\n> > > Couldn't we reuse tuple and index space as soon as there are no\n>\n> transactions\n>\n> > > that depend on the old tuple or index values. I have imagined that this\n>\n> was\n>\n> > > always part of the long-term master plan.\n> > >\n> > > Couldn't we keep a list of dead tuples in shared memory and look in the\n>\n> list\n>\n> > > first when deciding where to place new values for inserts or updates so\n>\n> we\n>\n> > > don't have to rely on VACUUM (even a background one)? If there are\n>\n> expired\n>\n> > > tuple slots in the list these would be used before allocating a new\n> > > slot\n>\n> from\n>\n> > > the tuple heap.\n> > >\n> > > The only issue is determining the lowest transaction ID for in-process\n> > > transactions which seems relatively easy to do (if it's not already\n> > > done somewhere).\n> > >\n> > > In the normal shutdown and startup case, a tuple VACUUM could be\n>\n> performed\n>\n> > > automatically. This would normally be very fast since there would not\n> > > be\n>\n> many\n>\n> > > tuples in the list.\n> > >\n> > > Index slots would be handled differently since these cannot be\n>\n> substituted\n>\n> > > one for another. However, these could be recovered as part of every\n>\n> index\n>\n> > > page update. Pages would be scanned before being written and any\n> > > expired slots that had transaction ID's lower than the lowest active\n> > > slot would\n>\n> be\n>\n> > > removed. This could be done for non-leaf pages as well and would result\n>\n> in\n>\n> > > only reorganizing a page that is already going to be written thereby\n> > > not adding much to the overall work.\n> > >\n> > > I don't think that internal pages that contain pointers to values in\n>\n> nodes\n>\n> > > further down the tree that are no longer in the leaf nodes because of\n>\n> this\n>\n> > > partial expired entry elimination will cause a problem since searches\n>\n> and\n>\n> > > scans will still work fine.\n> > >\n> > > Does VACUUM do something that could not be handled in this realtime\n>\n> manner?\n>\n> > > - Curtis\n> > >\n> > >\n> > >\n> > > ---------------------------(end of\n> > > broadcast)--------------------------- TIP 4: Don't 'kill -9' the\n> > > postmaster\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > [email protected] | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. 
| Newtown Square, Pennsylvania\n>\n> 19073\n>\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDr. Eckhardt + Partner GmbH\nhttp://www.epgmbh.de\n", "msg_date": "Wed, 27 Nov 2002 16:34:04 +0100", "msg_from": "Tommi Maekitalo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "How about OPTIMIZE?\n\neg. optimize customers\n\ninstead of analyze, could be paired with agressive\n\nso, OPTIMIZE AGREESSIVE\n\nvery much a glass half empty, half full type thing. vacuum is not a\nproblem, its a solution.\n\nMerlin\n\n\n\"\"Curtis Faith\"\" <[email protected]> wrote in message\nnews:[email protected]...\n> tom lane wrote:\n> > Sure, it's just shuffling the housekeeping work from one place to\n> > another. The thing that I like about Postgres' approach is that we\n> > put the housekeeping in a background task (VACUUM) rather than in the\n> > critical path of foreground transaction commit.\n>\n> Thinking with my marketing hat on, MVCC would be a much bigger win if\nVACUUM\n> was not required (or was done automagically). The need for periodic VACUUM\n> just gives ammunition to the PostgreSQL opponents who can claim we are\n> deferring work but that it amounts to the same thing.\n>\n> A fully automatic background VACUUM will significantly reduce but will not\n> eliminate this perceived weakness.\n>\n> However, it always seemed to me there should be some way to reuse the\nspace\n> more dynamically and quickly than a background VACUUM thereby reducing the\n> percentage of tuples that are expired in heavy update cases. If only a\nvery\n> tiny number of tuples on the disk are expired this will reduce the\naggregate\n> performance/space penalty of MVCC into insignificance for the majority of\n> uses.\n>\n> Couldn't we reuse tuple and index space as soon as there are no\ntransactions\n> that depend on the old tuple or index values. I have imagined that this\nwas\n> always part of the long-term master plan.\n>\n> Couldn't we keep a list of dead tuples in shared memory and look in the\nlist\n> first when deciding where to place new values for inserts or updates so we\n> don't have to rely on VACUUM (even a background one)? If there are expired\n> tuple slots in the list these would be used before allocating a new slot\nfrom\n> the tuple heap.\n>\n> The only issue is determining the lowest transaction ID for in-process\n> transactions which seems relatively easy to do (if it's not already done\n> somewhere).\n>\n> In the normal shutdown and startup case, a tuple VACUUM could be performed\n> automatically. This would normally be very fast since there would not be\nmany\n> tuples in the list.\n>\n> Index slots would be handled differently since these cannot be substituted\n> one for another. However, these could be recovered as part of every index\n> page update. Pages would be scanned before being written and any expired\n> slots that had transaction ID's lower than the lowest active slot would be\n> removed. 
This could be done for non-leaf pages as well and would result in\n> only reorganizing a page that is already going to be written thereby not\n> adding much to the overall work.\n>\n> I don't think that internal pages that contain pointers to values in nodes\n> further down the tree that are no longer in the leaf nodes because of this\n> partial expired entry elimination will cause a problem since searches and\n> scans will still work fine.\n>\n> Does VACUUM do something that could not be handled in this realtime\nmanner?\n>\n> - Curtis\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n", "msg_date": "Wed, 27 Nov 2002 11:26:30 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Realtime VACUUM,\n\twas: performance of insert/delete/update" }, { "msg_contents": "In a similar vein, setting the way back machine to the mid 80s when I was \nin the USAF and teaching the computer subsystem of the A-10 INS test \nstation, we had old reclaimed Sperry 1650 computers (the precursor to the \n1750) that had come out of the 1960 era fire control systems on \nbattleships like the Missouri and what not.\n\nWhen the OS went south, it would put up a message that said \"System Crash \nat address XXXXXXX\" or something very similar. A colonol saw that and \ninsisted that the folks who wrote the OS change the word crash, since in \nthe Air Force crash (as in plane crash) had such bad connotations. So, it \ngot changed to \"System Fault at address xxxxxxxxx\" For the first month or \ntwo that happened, folks would ask what a system fault was and what to do \nwith it. They new that a crash would need the machine to be power cycled \nbut didn't know what to do with a system fault. Shortly after that, the \nmanual for the test station had a little section added to it that \nbasically said a system fault was a crash. :-)\n\nOn Wed, 27 Nov 2002, Jim Beckstrom wrote:\n\n> Just for the humor of it, as well as to confirm Nick's perspective, \n> years ago on our inhouse developed Burroughs mainframe dbms, we had a \n> process called \"garbage collect\".\n> \n> Nicolai Tufar wrote:\n> \n> >I always wandered if VACUUM is the right name for the porcess. Now, when\n> >PostgreSQL\n> >is actively challenging in Enterprise space, it might be a good idea to give\n> >it a more\n> >enterprise-like name. Try to think how it is looking for an outside person\n> >to see\n> >us, database professionals hold lenghty discussions about the ways we\n> >vacuum a database. Why should you need to vacuum a database? Is it\n> >dirty? In my personal opinion, something like \"space reclaiming daemon\",\n> >\"free-list organizer\", \"tuple recyle job\" or \"segment coalesce process\"\n> >would\n> >sound more business-like .\n> >\n\n", "msg_date": "Wed, 27 Nov 2002 10:18:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Realtime VACUUM, was: performance of" } ]
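The mechanism being debated above can be seen on any 7.x installation: under MVCC every UPDATE or DELETE leaves an expired row version behind, and VACUUM is the housekeeping that reclaims it. A minimal sketch, with a made-up table name, assuming a stock 7.2-era server:

-- every UPDATE leaves a dead row version behind under MVCC
CREATE TABLE counters (id integer PRIMARY KEY, hits integer);
INSERT INTO counters VALUES (1, 0);
UPDATE counters SET hits = hits + 1 WHERE id = 1;
UPDATE counters SET hits = hits + 1 WHERE id = 1;
UPDATE counters SET hits = hits + 1 WHERE id = 1;

-- reclaim the expired versions; VERBOSE reports how many tuples were
-- removed, i.e. the work the posters want pushed into the background
VACUUM VERBOSE ANALYZE counters;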
[ { "msg_contents": "\nI have seen a number of real-world situations where bundling inserts into\ntransactions made a considerable difference - sometimes as much as a 100x\nspeed-up, and not just in Postgresql databases, but also commercial\nsystems\n(my experience is in Oracle & Sybase). I've often used an idiom of\nbuilding\nup rows until I hit some high-water mark, and then insert those rows in\none\nfell swoop - it's almost always measurably faster than one-at-a-time.\n\nSidebar: a number of years ago, while contracting at a regional telephone\ncompany,\nI became peripherally enmired in a gigantic billing-system-makeover\nfiasco.\nUpon initial deployment, the system was so slow at processing that it was\ntaking about 30 hours for each day of billing data. After a week or so,\nwhen it became apparent that fundamental Cash Flow was threatened, there\nwere\nmulti-hour conference calls, in which various VPs called for massive h/w\nupgrades and/or lawsuits against Oracle. An astute cohort of mine asked\nto\nsee some of the code, and found out that the original developers (at the\ntelco)\nhad created a bloated and slow control system in C++, using semaphores or\nsomesuch,\nto *serialize* inserts/updates/deletes, and so they had gigantic\nhome-built\nqueues of insert jobs. Not only were they not bundling updates in\ntransactions,\nthey were only ever doing one transaction at a time. (Apparently, they\nnever\nlearned RDBMS fundamentals.) He told them to rip out all that code, and\nlet\nOracle (like any other decent RDBMS) handle the update ordering. The\nresultant\nspeed-up factor was several hundred times.\n\n-R\n\n\nOn Mon, 25 Nov 2002 15:59:16 -0700 (MST), \"scott.marlowe\"\n<[email protected]> said:\n> On Mon, 25 Nov 2002, Josh Berkus wrote:\n> \n> > Scott,\n> > \n> > > It's quite easy to test if you have a database with a large table to play \n> > > with, use pg_dump to dump a table with the -d switch (makes the dump use \n> > > insert statements.) Then, make two versions of the dump, one which has a \n> > > begin;end; pair around all the inserts and one that doesn't, then use psql \n> > > -e to restore both dumps. The difference is HUGE. Around 10 to 20 times \n> > > faster with the begin end pairs. \n> > > \n> > > I'd think that anyone who's used postgresql for more than a few months \n> > > could corroborate my experience.\n> > \n> > Ouch! \n> > \n> > No need to get testy about it. \n> > \n> > Your test works as you said; the way I tried testing it before was different. \n> > Good to know. However, this approach is only useful if you are doing \n> > rapidfire updates or inserts coming off a single connection. But then it is \n> > *very* useful.\n> \n> I didn't mean that in a testy way, it's just that after you've sat\n> through \n> a fifteen minute wait while a 1000 records are inserted, you pretty \n> quickly switch to the method of inserting them all in one big \n> transaction. That's all.\n> \n> Note that the opposite is what really gets people in trouble. I've seen \n> folks inserting rather large amounts of data, say into ten or 15 tables, \n> and their web servers were crawling under parallel load. 
Then, they put \n> them into a single transaction and they just flew.\n> \n> The funny thing it, they've often avoided transactions because they \n> figured they'd be slower than just inserting the rows, and you kinda have \n> to make them sit down first before you show them the performance increase \n> from putting all those inserts into a single transaction.\n> \n> No offense meant, really. It's just that you seemed to really doubt that \n> putting things into one transaction helped, and putting things into one \n> big transaction if like the very first postgresql lesson a lot of \n> newcomers learn. :-)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n", "msg_date": "Mon, 25 Nov 2002 17:43:39 -0700", "msg_from": "\"Rich Scott\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of insert/delete/update" }, { "msg_contents": "On Mon, 2002-11-25 at 18:43, Rich Scott wrote:\n[snip]\n> upgrades and/or lawsuits against Oracle. An astute cohort of mine asked\n> to\n> see some of the code, and found out that the original developers (at the\n> telco)\n> had created a bloated and slow control system in C++, using semaphores or\n> somesuch,\n> to *serialize* inserts/updates/deletes, and so they had gigantic\n> home-built\n> queues of insert jobs. Not only were they not bundling updates in\n> transactions,\n> they were only ever doing one transaction at a time. (Apparently, they\n> never\n> learned RDBMS fundamentals.) He told them to rip out all that code, and\n> let\n> Oracle (like any other decent RDBMS) handle the update ordering. The\n> resultant\n> speed-up factor was several hundred times.\n\nJust goes to show the difference between RDBMSs. Even with the biggest\nAlphas possible at the time, our Customer Service Center app was\ncrawling because of lock conflicts (each transaction affected ~10-12 \ntables). Rdb/VMS works best with a TP monitor (specifically ACMS). \nWe were'nt using one, and weren't handling lock conflicts in the best\nway. Thus, the slowdowns.\n\nThe developers finally wrote a home-grown dedicated TP-like server,\nusing DEC Message Queue and /serialized/ data flow. Without all\nof the lock conflicts, performance warped forward.\n\nThe reason for this, I think, is that Oracle serializes things itself,\nwhereas Rdb/VMS expects ACMS to do the serialization. (Since both\nRdb & ACMS were both written by DEC, that's understandable I think.)\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "25 Nov 2002 19:52:33 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of insert/delete/update" } ]
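The speed-ups described in this thread come from paying one commit (and one fsync) for many rows instead of one per row. A rough sketch of the two styles against a hypothetical table, which is essentially what wrapping a pg_dump -d script in a BEGIN/END pair does:

-- slow: with no explicit transaction, each INSERT commits (and syncs) by itself
INSERT INTO orders (id, total) VALUES (1, 19.95);
INSERT INTO orders (id, total) VALUES (2, 42.50);
-- ... thousands more, one commit each ...

-- fast: the same INSERTs batched inside a single transaction
BEGIN;
INSERT INTO orders (id, total) VALUES (1, 19.95);
INSERT INTO orders (id, total) VALUES (2, 42.50);
-- ... thousands more ...
COMMIT;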
[ { "msg_contents": "So, I'm trying to create a full text index as described here:\n\nhttp://techdocs.postgresql.org/techdocs/fulltextindexing.php\n\nEverything was going mostly okay...\n\nI had to hack a quick PHP script instead of using the Perl once since I\ndidn't have a working Pg.pm, but that was a minor speed bump.\n\nThen I hit a real road-block...\n\n\\copy article_fti from fulltext.sorted\n\\.\nERROR: copy: line 34635390, cannot extend article_fti: No space left on\ndevice.\n Check free disk space.\nPQendcopy: resetting connection\narchive=> \\q\n[root@rm-004-24 utilities]# df -h\nFilesystem Size Used Avail Use% Mounted on\n/dev/sda3 15G 15G 0 100% /\n/dev/sda1 48M 6.1M 39M 14% /boot\nnone 439M 0 439M 0% /dev/shm\n\nOh. Yeah. I guess that *IS* going to be kind of big...\n\nAny SWAGs how much disk space is required for a 500 M fulltext.sorted file?\n\nIE, the ASCII file of string/OID pairs, in tab-delimited form, is 500 M --\nHow much PostgreSQL space does that turn into with the tables/indices as\ndescribed the URL above?\n\nWhen I deleted all the fti rows, and did a VACUUM, there was almost 2G\navailable...\n\nALSO:\nWouldn't using f1.string = 'perth' be faster than f1.string ~ '^perth' and\nequally useful? Or is ~ with ^ somehow actually faster than the seemingly\nsimple = comparison?\n\nAND:\nWould using OR for the individual word comparisons be a big drag on speed?\n I'd kind of like to give ranked results based on how many of the terms\nwere present rather than a complete filter.\n\nI'd be happy to try the EXPLAIN queries, but I don't think they're going\nto be accurate without the data in the FTI table...\n\nI got some weird results when I did a test run with a very small dataset\nin the FTI table -- But I also think I was doing it in the middle of a\ntrain-wreck between daily VACUUM and pg_dump, which were thrashing each\nother with all the FTI data I had imported just for the small test...\n\nI've altered the cron jobs to have more time in between.\n\nTHANKS IN ADVANCE!\n\n\n\n", "msg_date": "Tue, 26 Nov 2002 01:36:59 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Full Text Index disk space requirements" } ]
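When a bulk load like this runs the disk out of space, a quick way to see where the space went is to ask pg_class how many pages the table and its indexes occupy. A sketch, assuming the default 8K block size; relpages is only refreshed by VACUUM or ANALYZE, so run that first:

VACUUM ANALYZE article_fti;

SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relname LIKE 'article_fti%'
ORDER BY relpages DESC;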
[ { "msg_contents": "Hi,\n\nif memory serves me right, the space requirements for this would be \nsomething like:\n\n 42 (per tuple overhead)\n 4 (size of OID?)\n 16 (substitute with the maximum length of any 'string' in your \nfulltext.sorted)\n+ -------------\n 62\n 20,000,000 (substitute with number of lines in fulltext.sorted, \ni.e. 'wc -l fulltext.sorted')\n*---------------------------\n 1,240,000,000\n\nor about 1.2G?\n\nor \nOn 11/26/2002 01:36:59 PM typea wrote:\n> Wouldn't using f1.string = 'perth' be faster than f1.string ~ '^perth' \nand\n> equally useful? Or is ~ with ^ somehow actually faster than the \nseemingly\n> simple = comparison?\n\nf1.string = 'perth' would only match 'perth', while f1.string ~ '^perth' \nwould also match 'perthinent' (yes, I know this word does not exist).\n\nMaarten\n\nps. are you trying to use the stuf from the 'fulltextindex' directory in \ncontrib/? I originally wrote this as an experiment, and it actually turned \nout not to be fast enough for my purpose. I've never done anything with \nfull text indexing again, but I believe that currently there are better \nsolutions based on PostgreSQL (i.e. OpenFTI?)\n\n\n-------------------------------------------------------------- --\n Visit our Internet site at http://www.reuters.com\n\nGet closer to the financial markets with Reuters Messaging - for more\ninformation and to register, visit http://www.reuters.com/messaging\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.\n", "msg_date": "Tue, 26 Nov 2002 16:48:42 +0400", "msg_from": "Maarten Boekhold <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full Text Index disk space requirements" }, { "msg_contents": "> ps. are you trying to use the stuf from the 'fulltextindex' directory in\n> contrib/?\n\nYes.\n\n> I originally wrote this as an experiment, and it actually\n> turned out not to be fast enough for my purpose. I've never done\n> anything with full text indexing again, but I believe that currently\n> there are better solutions based on PostgreSQL (i.e. OpenFTI?)\n\nOh. ...\n\nIn case anybody finds these archived, it's OpenFTS:\nhttp://sourceforge.net/projects/openfts/\n\nPerhaps my question should be \"What's the best full-text-index solution?\"\n\nToo open-ended?\n\nPostgreSQL 7.1.3 (upgrading is not out of the question, but...)\n~20,000 text articles scanned with OCR from _The Bulletin of the Atomic\nScientists_ (the Doomsday Clock folks)\nAverage text length: 9635 characters\n Max text length: 278227\nOnly 2000 of the texts are null or '', and those are probably \"buglets\"\n\nAny other pertinent facts needed?\n\n\n\n", "msg_date": "Tue, 26 Nov 2002 12:13:53 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text Index disk space requirements" }, { "msg_contents": "[email protected] kirjutas K, 27.11.2002 kell 01:13:\n> > ps. are you trying to use the stuf from the 'fulltextindex' directory in\n> > contrib/?\n> \n> Yes.\n> \n> > I originally wrote this as an experiment, and it actually\n> > turned out not to be fast enough for my purpose. I've never done\n> > anything with full text indexing again, but I believe that currently\n> > there are better solutions based on PostgreSQL (i.e. OpenFTI?)\n> \n> Oh. 
...\n> \n> In case anybody finds these archived, it's OpenFTS:\n> http://sourceforge.net/projects/openfts/\n> \n> Perhaps my question should be \"What's the best full-text-index solution?\"\n> \n\nYou should at least check possibilities of using\n\ncontrib/tsearch\n\nand\n\ncontrib/intarray\n\n\nIf you find out some good answers, report back to this list :)\n\n--------------\nHannu\n\n", "msg_date": "27 Nov 2002 02:33:14 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text Index disk space requirements" }, { "msg_contents": ">> Wouldn't using f1.string = 'perth' be faster than f1.string ~ '^perth'\n>>\n> and\n>> equally useful? Or is ~ with ^ somehow actually faster than the\n> seemingly\n>> simple = comparison?\n>\n> f1.string = 'perth' would only match 'perth', while f1.string ~ '^perth'\n> would also match 'perthinent' (yes, I know this word does not exist).\n\nD'oh! I figured that one out in the shower this morning. Sleep\ndeprivation, I guess...\n\nBut something is very wrong with what I've done...\n\narchive=> explain SELECT article.* FROM article , article_fti as f1,\narticle_fti as f2 WHERE TRUE AND (TRUE AND (f1.string ~ '^nuclear' AND\nf1.id = article.oid ) AND (f2.string ~ '^winter' AND f2.id =\narticle.oid ) ) ;\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=1476541.78..1492435.98 rows=77581 width=228)\n -> Merge Join (cost=740017.07..744846.55 rows=368824 width=224)\n -> Sort (cost=3492.36..3492.36 rows=17534 width=220)\n -> Seq Scan on article (cost=0.00..1067.34 rows=17534\nwidth=220)\n -> Sort (cost=736524.71..736524.71 rows=368824 width=4)\n -> Seq Scan on article_fti f2 (cost=0.00..693812.18\nrows=368824 width=4)\n -> Sort (cost=736524.71..736524.71 rows=368824 width=4)\n -> Seq Scan on article_fti f1 (cost=0.00..693812.18 rows=368824\nwidth=4)\n\nEXPLAIN\narchive=> explain select * from article where text like '%nuclear%' and\ntext like '%winter%';\nNOTICE: QUERY PLAN:\n\nSeq Scan on article (cost=0.00..1155.01 rows=1 width=216)\n\nEXPLAIN\narchive=> \\d article_fti\n Table \"article_fti\"\n Attribute | Type | Modifier\n-----------+------+----------\n string | text |\n id | oid |\nIndices: article_fti_id_index,\n article_fti_string_index\n\narchive=> \\d article\n Table \"article\"\n Attribute | Type | Modifier\n-------------------+---------+----------------------------------------------\n id | integer | not null default nextval('article_ID'::text)\n...\n text | text |\nIndices: article_id_index,\n article_oid_index,\n article_type_index\n\narchive=>\n\nI'm befuddled.\n\n\n\n", "msg_date": "Tue, 26 Nov 2002 18:40:38 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full Text Index disk space requirements" }, { "msg_contents": "I looked for a \"known bugs\" sort of database to search before bugging you\nguys, but failed to find it... But I am at least asking before I submit a\nnew bug report :-)\n\nIn version 7.1.3 on a Linux box:\n\nA particularly long, nasty query works \"just fine\" (returning seemingly\ncorrect results in about 15 seconds) until I tack on \"LIMIT 1\"\n\nAdding LIMIT 1, however, seems to make the query take an infinite amount\nof time. 
Well, more than 5 minutes, anyway, and I'm not that patient when\nI know it worked okay without it the LIMIT, if you know what I mean.\n\nHere is the query:\n\nSELECT DISTINCT *, 0 + 10 * (lower(title) like '%albert einstein%') ::int\n+ 10 * (lower(author_flattened) like '%albert einstein%') ::int + 30 *\n(lower(subject_flattened) like '%albert einstein%') ::int + 9 *\n(substring(lower(title), 1, 20) like '%albert%') ::int + 25 *\n(substring(lower(text), 1, 20) LIKE '%albert%') ::int + (8 * (lower(title)\nLIKE '%albert%' AND lower(title) LIKE '%einstein%' AND ((title ~*\n'albert.{0,20}einstein') OR (title ~* 'einstein.{0,20}albert'))) ::int) +\n(1 * ( (lower(title) LIKE '%albert%') )::int) + (1 * (\n(lower(author_flattened) LIKE '%albert%') )::int) + (1 * (\n(lower(subject_flattened) LIKE '%albert%') )::int) + 9 *\n(substring(lower(title), 1, 20) like '%einstein%') ::int + 25 *\n(substring(lower(text), 1, 20) LIKE '%einstein%') ::int + (8 *\n(lower(title) LIKE '%einstein%' AND lower(title) LIKE '%albert%' AND\n((title ~* 'einstein.{0,20}albert') OR (title ~*\n'albert.{0,20}einstein'))) ::int) + (1 * ( (lower(title) LIKE\n'%einstein%') )::int) + (1 * ( (lower(author_flattened) LIKE '%einstein%')\n)::int) + (1 * ( (lower(subject_flattened) LIKE '%einstein%') )::int) AS\npoints FROM article WHERE FALSE OR (lower(title) LIKE '%albert%') OR\n(lower(author_flattened) LIKE '%albert%') OR (lower(subject_flattened)\nLIKE '%albert%') OR (lower(title) LIKE '%einstein%') OR\n(lower(author_flattened) LIKE '%einstein%') OR (lower(subject_flattened)\nLIKE '%einstein%') ORDER BY points desc, volume, number, article.article\nLIMIT 1 , 1;\n\n\nexplain with or without the LIMIT part is about what you'd expect.\n\nLimit (cost=1596.50..1596.50 rows=1 width=216)\n -> Unique (cost=1596.45..1596.50 rows=1 width=216)\n -> Sort (cost=1596.45..1596.45 rows=1 width=216)\n -> Seq Scan on article (cost=0.00..1596.44 rows=1 width=216)\n\nObviously the \"Limit\" line is gone from the explain output when there is\nno LIMIT, but the other lines are all the same.\n\nIs this a known bug, is there a fix or work-around?\nIf not, should I report it, or will the first answer be \"Upgrade.\" ?\n\nThe table in question has 17,000 reords, and the various fields mentioned\nhere are all rather short -- Just author names, subject lines, and titles\nof text articles. 
[The articles themselves are super long, but are not\ninvolved in this query.]\n\nI can take out the ~* parts, and life is good again, so almost for sure\nthat's a critical component in the failure.\n\nps auxwwww | grep postgrs seems to report an \"idle\" postgres process for\neach failed query -- attempting to ^C the query and/or killing the idle\nprocess (I know, \"Don't\") is unfruitful.\n\nkill -9 does nuke the idle processes, IIRC, but I'm not 100% sure...\n\nI restarted the server soon after that, since (A) PHP command-line (aka\n\"CGI\") was refusing to start, complaining about \"mm\" not being loadable,\nand there was not much free RAM and the web-server was not particularly\nhappy about that state of affairs...\n\nThe schema is probably not particularly interesting -- Pretty much every\nfield involved is a 'text' field, but here you go:\n\n Table \"article\"\n Attribute | Type | Modifier\n-------------------+---------+----------------------------------------------\n id | integer | not null default nextval('article_ID'::text)\n volume | text |\n number | text |\n article | text |\n date | text |\n cover_date | text |\n title | text |\n author | text |\n author_last | text |\n author_first | text |\n subject | text |\n pages | text |\n artwork | text |\n text | text |\n type | integer |\n type_hardcoded | text |\n type_detailed | integer |\n abstract | text |\n subject_flattened | text |\n author_flattened | text |\nIndices: article_id_index,\n article_oid_index,\n article_type_index\n\nJust FYI, the _flattened fields are de-normalizing (or is it\nre-normalizing?) some relation tables so that we're not making a zillion\ntuples here, and it's just a simple (we though) short and sweet text\nsearch.\n\n\nPS Thanks for all your help on the full text index! I'm still evaluating\nsome options, but a home-brew concordance is showing the most promise. \nI'll post source/details if it works out.\n\n\n\n", "msg_date": "Fri, 13 Dec 2002 18:16:06 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "~* + LIMIT => infinite time?" }, { "msg_contents": "typea,\n\n> I looked for a \"known bugs\" sort of database to search before bugging you\n> guys, but failed to find it... But I am at least asking before I submit a\n> new bug report :-)\n> \n> In version 7.1.3 on a Linux box:\n\nYou'll get a snarky response, and then be told to upgrade, if you try to \nsubmit a bug in 7.1.3. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 13 Dec 2002 18:23:32 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> You'll get a snarky response, and then be told to upgrade, if you try to \n> submit a bug in 7.1.3. \n\n7.1 is a tad long in the tooth, but still I'm curious about this. I\ndon't see how <plan A> can possibly take longer than <plan A> + <LIMIT\nnode on top>.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Dec 2002 23:48:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time? " }, { "msg_contents": "> Josh Berkus <[email protected]> writes:\n>> You'll get a snarky response, and then be told to upgrade, if you try\n>> to submit a bug in 7.1.3.\n>\n> 7.1 is a tad long in the tooth, but still I'm curious about this. I\n> don't see how <plan A> can possibly take longer than <plan A> + <LIMIT\n> node on top>.\n\nHey Tom. 
I think we met very briefly at the International PHP Conference\nin Frankfurt in 2001... Anyway.\n\nIt's actually the other way around. <Plan A> takes like 4 seconds. <Plan\nA> + <LIMIT node on top> takes literally FOREVER and leaves a postgres\nprocess hanging 'round that I have to kill -9 to get rid of.\n\nI'd understand the LIMIT clause taking a bit longer, or being faster for\nstartup (if there were no ORDER BY, which there is) but I never even\nconsidered it would hang the whole thing. Actually, PostgreSQL has been\nso reliable over the years, the idea that I'd run across a bug was just\nforeign to me... So I've been trying to tune performance on this query\nfor weeks now, not realizing that the speed wasn't the issue at all. I\ncould almost rip out the LIMIT completely if the application logic let me,\nand if the performance were a bit better.\n\nIt occurred to me last night that the actual data *MIGHT* be involved --\nIt's some OCR text, and there are a few scattered non-ASCII characters\ninvolved... So *MAYBE* the actual text getting scanned could also be\nimportant.\n\nIt seems unlikely, since the non-LIMIT query returns all the data just\nfine, but just in case...\n\nHere's a schema and a full dump for anybody that wants to dig in:\nhttp://bulletinarchive.org/pg_dump/\n\nI could provide PHP source as well, or the query posted in this thread can\nserve as the test case.\n\nAt the moment, I've altered the application to not use LIMIT when I have\n~* in the query, and it \"works\" -- only the paging of results is broken,\nand the whole page takes twice as long to load as it should in those\ncases, since it's doing the same query twice and snarfing all the monster\ndata and then throwing away the majority of rows in both cases. I need\nthe first row to get the highest score, and the rows for paging in the\nreal application...\n\nAnyway, my point is that the queries seem fine without the LIMIT clause,\nand \"hang\" with both \"~*\" and LIMIT, and I've even gone so far as to\nincorporate that into the application logic for now, just to have a page\nthat loads at all instead of one that hangs.\n\nMeanwhile, I guess I should flail at it and try 7.3 in the hopes the bug\ndisappeared. I was hoping to know for sure that it was a fixed bug in\nthat upgrade path.\n\nBoss actually said we should go ahead and upgrade just on principle\nanyway. It's nice to have a smart boss. :-)\n\n\n\n", "msg_date": "Sat, 14 Dec 2002 16:41:54 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": "Typea,\n\n> At the moment, I've altered the application to not use LIMIT when I\n> have\n> ~* in the query, and it \"works\" -- only the paging of results is\n> broken,\n\nWould your application allow you to use \" ILIKE '%<VALUE>%'\" in the\nquery instead of \"~*\" ? If so, does the query still hang with ILIKE\n... LIMIT?\n\n-Josh\n", "msg_date": "Sun, 15 Dec 2002 12:22:25 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": "[email protected] kirjutas P, 15.12.2002 kell 05:41:\n> It occurred to me last night that the actual data *MIGHT* be involved --\n> It's some OCR text, and there are a few scattered non-ASCII characters\n> involved... 
So *MAYBE* the actual text getting scanned could also be\n> important.\n> \n> It seems unlikely, since the non-LIMIT query returns all the data just\n> fine, but just in case...\n\nHave you tried using DECLARE CURSOR...; FETCH 1; CLOSE CURSOR; instead\nof LIMIT ?\n\n> Here's a schema and a full dump for anybody that wants to dig in:\n> http://bulletinarchive.org/pg_dump/\n\ngzipping the data could make sense - data.sql goes from 200M to 60M ;)\n\n> I could provide PHP source as well, or the query posted in this thread can\n> serve as the test case.\n> \n> At the moment, I've altered the application to not use LIMIT when I have\n> ~* in the query, and it \"works\" -- only the paging of results is broken,\n> and the whole page takes twice as long to load as it should in those\n> cases, since it's doing the same query twice and snarfing all the monster\n> data and then throwing away the majority of rows in both cases. I need\n> the first row to get the highest score, and the rows for paging in the\n> real application...\n> \n> Anyway, my point is that the queries seem fine without the LIMIT clause,\n> and \"hang\" with both \"~*\" and LIMIT, and I've even gone so far as to\n> incorporate that into the application logic for now, just to have a page\n> that loads at all instead of one that hangs.\n> \n> Meanwhile, I guess I should flail at it and try 7.3 in the hopes the bug\n> disappeared.\n\nI tested (part of) it on 7.3 , had to manually change ::int to\ncase-when-then-else-end as there is no cast from bool to int in7.3\n\nThis ran fine:\n\nSELECT DISTINCT \n *,\n 0 + case when (title ilike '%albert einstein%') then 10 else 0 end\n + case when ( title iLIKE '%einstein%'\n AND title iLIKE '%albert%'\n AND ( (title ~* 'einstein.{0,20}albert')\n OR (title ~* 'albert.{0,20}einstein'))) then 8 else 0\nend\n as points\n FROM article\nWHERE FALSE\n OR (title iLIKE '%albert%')\n OR (author_flattened iLIKE '%albert%')\n OR (subject_flattened iLIKE '%albert%')\n OR (title iLIKE '%einstein%')\n OR (author_flattened iLIKE '%einstein%')\n OR (subject_flattened iLIKE '%einstein%')\nORDER BY points desc, volume, number, article.article\nLIMIT 1 OFFSET 1;\n\nI also changed \n \"lower(field) like '%albert%'\"\n to\n \"field ilike '%albert%'\"\n\nand got about 20% speed boost - EXPLAIN ANALYZE reported 0.189 insead of\n0.263 sec as actual time.\n\n> I was hoping to know for sure that it was a fixed bug in\n> that upgrade path.\n> \n> Boss actually said we should go ahead and upgrade just on principle\n> anyway. It's nice to have a smart boss. :-)\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "16 Dec 2002 05:03:54 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" 
}, { "msg_contents": "> Have you tried using DECLARE CURSOR...; FETCH 1; CLOSE CURSOR; instead\n> of LIMIT ?\n\nI think I did, in the monitory, and it worked fine.\n\n> I tested (part of) it on 7.3 , had to manually change ::int to\n> case-when-then-else-end as there is no cast from bool to int in7.3\n\nAn upgrade to 7.3 has, in fact, gotten rid of that bug...\n\nThough now I'm forced to use localhost for connecting, since:\nA) Upon boot, I'm told I can't use password or crypt, but\nB) When trying to connect, I can't use md5\nC) the passwords get turned into md5 whether I like it or not\nWhat's up with all that?\n\nI also don't understand why the incredibly useful function I had to\nauto-typecast from bool to int won't work using ::int syntax, but will if\nI use int4(...) syntax. Grrr.\n\nAnd breaking the LIMIT x, y thing was annoying.\n\nOh well. I can move forward with some changes in the way we do things.\n\nNow that the query runs, I can start in on the optimization again :-)\n\nTHANKS ALL!!!\n\nOh, and the lower(field) LIKE is MySQL compatible, but I don't think MySQL\nhas an ILIKE... We're abandoning the MySQL support now anyway, since we\nNEED performance way more than we need MySQL compatibility.\n\nThanks again!\n\n\n\n", "msg_date": "Mon, 16 Dec 2002 00:21:14 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": ">> I tested (part of) it on 7.3 , had to manually change ::int to\n>> case-when-then-else-end as there is no cast from bool to int in7.3\n>\n> An upgrade to 7.3 has, in fact, gotten rid of that bug...\n\nDamn. I spoke to soon. It *SEEMS* like it's back again. Very, very\nstrange.\n\nIf explain claims the \"cost\" will be ~1000, and then a query takes SO long\nto return I give up and hit ^C, that's just not right, right? I mean,\nthat \"cost\" near 1000 may not be in seconds or anything, but 1000 is\npretty low, isn't it?\n\nI give up for now. Need sleep.\n\n\n\n\n", "msg_date": "Mon, 16 Dec 2002 02:20:27 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": "Typea,\n\n> Oh, and the lower(field) LIKE is MySQL compatible, but I don't think\n> MySQL\n> has an ILIKE... We're abandoning the MySQL support now anyway, since\n> we\n> NEED performance way more than we need MySQL compatibility.\n\nILIKE is SQL-spec. There's reasons to use any:\n\nILIKE is slightly faster on un-anchored text searches (\"name ILIKE\n'%john%'\")\n\nlower(column) can be indexed for anchored text searches (\"lower(name)\nLIKE 'john%'\")\n\n\"~*\" cannot be indexed, but will accept regexp operators for\nsophisticated text searches (\"name ~* 'jo[han]n?'\")\n\n-Josh Berkus\n", "msg_date": "Mon, 16 Dec 2002 10:26:24 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time?" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n> ILIKE is SQL-spec.\n\nIt is? I don't see it in there ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Dec 2002 13:34:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ~* + LIMIT => infinite time? " } ]
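The indexing distinctions Josh lists just above can be sketched against the article table from this thread. The index name is invented, and on these releases the optimizer will only consider the index for patterns anchored at the start ('einstein%', not '%einstein%'), and typically only under the C locale:

-- functional index on the lowercased column (works on 7.3)
CREATE INDEX article_lower_title_idx ON article (lower(title));

-- anchored pattern: can use the index
SELECT * FROM article WHERE lower(title) LIKE 'einstein%';

-- unanchored or case-insensitive-regex searches still sequential-scan
SELECT * FROM article WHERE title ILIKE '%einstein%';
SELECT * FROM article WHERE title ~* 'einstein';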
[ { "msg_contents": "Hi all,\n\nI have developed a system that is composed of 10 separate running \nprocesses. Each of the processes opens a connection to postgres with the \nC interface.\n\nWhenever any process has to run a query, it forks and creates a new \nindependent process (not a thread). What happens to the database \nconnection in that case?\n\nIn other words, if I fork a process that has an open postgres \nconnection, do its children inherit that connection, and can I use \nit as is, closing it at the end, or do I have to open a new \nconnection for each child process?\n\nThanks in advance,\n\nThrasher\n\n", "msg_date": "Wed, 27 Nov 2002 15:52:20 +0100", "msg_from": "Thrasher <[email protected]>", "msg_from_op": true, "msg_subject": "Child process procedures" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Nicolai Tufar [mailto:[email protected]] \n> Sent: 27 November 2002 14:02\n> To: [email protected]; PgSQL Performance ML\n> Subject: Re: [PERFORM] [HACKERS] Realtime VACUUM, was: \n> performance of insert/delete/update\n> \n> \n> I always wondered if VACUUM is the right name for the \n> process. Now, when PostgreSQL is actively challenging in \n> Enterprise space, it might be a good idea to give it a more \n> enterprise-like name. Think of how it looks to an outside \n> person when database professionals hold lengthy \n> discussions about the ways we vacuum a database. Why should \n> you need to vacuum a database? Is it dirty? In my personal \n> opinion, something like \"space reclaiming daemon\", \"free-list \n> organizer\", \"tuple recycle job\" or \"segment coalesce process\" \n> would sound more business-like.\n\nAs inspired by the SQL Server Enterprise Manager I've just been swearing\nat:\n\n\"Database Optimizer\"\n\nRegards, Dave.\n", "msg_date": "Wed, 27 Nov 2002 15:09:59 -0000", "msg_from": "\"Dave Page\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Realtime VACUUM,\n\twas: performance of insert/delete/update" } ]
[ { "msg_contents": "In our first installment a couple of weeks ago, I was asking about low end hardware optimizations, it got into ide/scsi, memory, and drive layout issues.\n\nI've been wondering more about the IDE/SCSI difference for low end hardware, and since my dev worksatation needed more hard drive space, I have a good opportunity to aquire hardware and run some benchmarks. \n\nThe machine:\n\nSawtooth g4/400, X 10.1.5, PG 7.2.1 from entropy.ch's packages.\nIDE: udma66 controller, ibm 7200rpm 15 gig deskstar. On it's own controller on the motherboard. This is the system drive.\nSCSI: Ultra160 ATTO Apple OEM PCI controller, Ultra320 cable, IBM 10k rpm 18 gig Ultrastar drive. Total scsi chain price = $140. \n\npgbench was run from a machine (debian woody) on the local net segment that could actually compile pgbench. \n\nThe IDE drive is about 2 years old, but was one of the fastest at the time. The SCSI drive is new but of inexpensive provenance. Essentially, roughly what I can afford if I'm doing a raid setup.\n\nMy gut feeling is that this is stacked against the IDE drive. It's older lower rpm technology, and it has the system and pg binaries on it. The ide system in OSX probably has more development time behind it than scsi.\n\nHowever, the results say something a little different. \n\nRunning pgbench with: scaling factor=1, # transactions = 100, and #clients =1,2,3,5,10,15 The only difference that was more than the scatter between runs was at 15 clients, and the SCSI system was marginally better. (diff of 1-2 tps at ~ 60 sustained)\n\nRoughly, I'm seeing the following performance\n\nclients SCSI IDE (tps)\n1 83 84\n2 83 83\n3 79 79\n5 77 76\n10 73 73\n15 66 64\n\nI'm enclined to think that the bottleneck is elsewhere for this system and this benchmark, but I'm not sure where. Probably processor or bandwidth to memory. \n\nMy questions from this excercise are:\n\n1) do these seem like reasonable values, or would you have expected a bigger difference. \n2) Is pgbench the proper test? It's not my workload, but it's also easily replicated at other sites. 
\n3) Does running remotely make a difference?\n\neric\n\n\n\n", "msg_date": "Wed, 27 Nov 2002 10:51:02 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Low Budget Performance, Part 2" }, { "msg_contents": "eric soroos <[email protected]> writes:\n> Running pgbench with: scaling factor=1, # transactions = 100, and\n> #clients =1,2,3,5,10,15\n\nThe scaling factor has to at least equal the max # of clients you intend\nto test, else pgbench will spend most of its time fighting update\ncontention (parallel transactions wanting to update the same row).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Nov 2002 14:19:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2 " }, { "msg_contents": "On Wed, 27 Nov 2002 14:19:22 -0500 in message <[email protected]>, Tom Lane <[email protected]> wrote:\n> eric soroos <[email protected]> writes:\n> > Running pgbench with: scaling factor=1, # transactions = 100, and\n> > #clients =1,2,3,5,10,15\n> \n> The scaling factor has to at least equal the max # of clients you intend\n> to test, else pgbench will spend most of its time fighting update\n> contention (parallel transactions wanting to update the same row).\n> \n\nOk, with the scaling factor set at 20, the new results are more in line with expectations:\n\nFor 1-10 clients, IDE gets 25-30 tps, SCSI 40-50 (more with more clients, roughly linear).\n\nThe CPU was hardly working in these runs (~50% on scsi, ~20% on ide), vs nearly 100% on the previous run. \n\nI'm suspect that the previous runs were colored by having the entire dataset in memory as well as the update contention. \n\neric\n\n\n\n", "msg_date": "Wed, 27 Nov 2002 12:45:39 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "On Wednesday 27 Nov 2002 8:45 pm, eric soroos wrote:\n> Ok, with the scaling factor set at 20, the new results are more in line\n> with expectations:\n>\n> For 1-10 clients, IDE gets 25-30 tps, SCSI 40-50 (more with more clients,\n> roughly linear).\n>\n> The CPU was hardly working in these runs (~50% on scsi, ~20% on ide), vs\n> nearly 100% on the previous run.\n>\n> I'm suspect that the previous runs were colored by having the entire\n> dataset in memory as well as the update contention.\n\nA run of vmstat while the test is in progress might well show what's affecting \nperformance here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Nov 2002 09:31:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "On Wed, 2002-11-27 at 14:45, eric soroos wrote:\n> On Wed, 27 Nov 2002 14:19:22 -0500 in message <[email protected]>, Tom Lane <[email protected]> wrote:\n> > eric soroos <[email protected]> writes:\n> > > Running pgbench with: scaling factor=1, # transactions = 100, and\n> > > #clients =1,2,3,5,10,15\n> > \n> > The scaling factor has to at least equal the max # of clients you intend\n> > to test, else pgbench will spend most of its time fighting update\n> > contention (parallel transactions wanting to update the same row).\n> > \n> \n> Ok, with the scaling factor set at 20, the new results are more in line with \n> expectations:\n> \n> For 1-10 clients, IDE gets 25-30 tps, SCSI 40-50 (more with more clients, \n> roughly linear).\n> \n> The CPU was hardly working in these runs (~50% on scsi, ~20% 
on ide), vs nearly \n> 100% on the previous run. \n\nGoing back to the OP, you think the CPU load is so high when using SCSI\nbecause of underperforming APPLE drivers?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "28 Nov 2002 06:05:17 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "Ron Johnson wrote:\n> \n> On Wed, 2002-11-27 at 14:45, eric soroos wrote:\n<snip>\n> > The CPU was hardly working in these runs (~50% on scsi, ~20% on ide), vs nearly\n> > 100% on the previous run.\n> \n> Going back to the OP, you think the CPU load is so high when using SCSI\n> because of underperforming APPLE drivers?\n\nHmmm..... Eric, have you tuned PostgreSQL's memory buffers at all?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n<snip>\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 28 Nov 2002 23:56:45 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "> > I'm suspect that the previous runs were colored by having the entire\n> > dataset in memory as well as the update contention.\n>\n> A run of vmstat while the test is in progress might well show what's affecting\n> performance here.\n\nUnfortunately, vmstat on OSX is not what it is on Linux.\n\nvm_stat on osx gives virtual memory stats, but not disk io or cpu load.\niostat looks promising, but is a noop.\nTop takes 10 % of processor.\n\neric\n\n\n", "msg_date": "Thu, 28 Nov 2002 10:11:19 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "\n> > \n> > For 1-10 clients, IDE gets 25-30 tps, SCSI 40-50 (more with more clients, \n> > roughly linear).\n> > \n> > The CPU was hardly working in these runs (~50% on scsi, ~20% on ide), vs nearly \n> > 100% on the previous run. \n> \n> Going back to the OP, you think the CPU load is so high when using SCSI\n> because of underperforming APPLE drivers?\n\nI think it's a combination of one significant digit for cpu load and more transactions on the scsi system. I'm concluding that since the processor wasn't redlined, the bottleneck is somewhere else. Given the heavily transactional nature of these tests, it's reasonable to assume that the bottleneck is the disk. \n\n10 tps= 600 transactions per minute, so for the scsi drive, I'm seeing 3k transactions / 10k revolutions, for a 30% 'saturation'. For the ide, I'm seeing 1800/7200 = 25% 'saturation'. \n\nThe rotational speed difference is 40% (10k/7.2k), and the TPS difference is about 60% (50/30 or 40/25)\n\nSo, my analysis here is that 2/3 of the difference in transaction speed can be attributed to rotational speed. 
It appears that the scsi architecture is also somewhat more efficient as well, allowing for a further 20% increase (over baseline) in tps.\n\nA test with a 7.2k rpm scsi drive would be instructive, as it would remove the rotational difference from the equation. As the budget for this is $0, donations will be accepted.\n\neric\n\n\n\n", "msg_date": "Thu, 28 Nov 2002 10:32:46 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "\n> > Going back to the OP, you think the CPU load is so high when using SCSI\n> > because of underperforming APPLE drivers?\n> \n> Hmmm..... Eric, have you tuned PostgreSQL's memory buffers at all?\n> \n\nShared memory, buffers, and sort memory have been boosted as well as the number of clients. \n\nThe tuning that I've done is for my app, not for pgbench. \n\neric\n\n\n\n\n", "msg_date": "Thu, 28 Nov 2002 10:39:13 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "Hi,\n\nSpeaking of which, what is the recommended optimum setting for\nmemory buffers?\n\nThanks,\n\nL.\nOn Thu, 28 Nov 2002, Justin Clift wrote:\n> \n> Hmmm..... Eric, have you tuned PostgreSQL's memory buffers at all?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> <snip>\n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nMy other vehicle is my imagination.\n - bumper sticker\n\n", "msg_date": "Sat, 30 Nov 2002 08:40:04 -0800 (PST)", "msg_from": "Laurette Cisneros <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "Laurette Cisneros wrote:\n> \n> Hi,\n> \n> Speaking of which, what is the recommended optimum setting for\n> memory buffers?\n\nHi Laurette,\n\nIt depends on how much memory you have, how big your database is, the\ntypes of queries, expected number of clients, etc.\n\nIt's just that the default settings commonly cause non-optimal\nperformance and massive CPU utilisation, so I was wondering.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Thanks,\n> \n> L.\n> On Thu, 28 Nov 2002, Justin Clift wrote:\n> >\n> > Hmmm..... Eric, have you tuned PostgreSQL's memory buffers at all?\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> >\n> > <snip>\n> >\n> >\n> \n> --\n> Laurette Cisneros\n> The Database Group\n> (510) 420-3137\n> NextBus Information Systems, Inc.\n> www.nextbus.com\n> ----------------------------------\n> My other vehicle is my imagination.\n> - bumper sticker\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 01 Dec 2002 06:47:45 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" }, { "msg_contents": "On Thu, 28 Nov 2002, eric soroos wrote:\n\n> The rotational speed difference is 40% (10k/7.2k), and the TPS \n> difference is about 60% (50/30 or 40/25)\n\nI would suggest that areal density / xfer rate off the platters is the \nREAL issue, not rotational speed. 
Rotational speed really only has a \nsmall effect on the wait time for the heads to get in position, whereas \nxfer rate off the platters is much more important.\n\nMy older 7200RPM 2Gig and 4Gig UW SCSI drives are no match for my more \nmodern 40 Gig 5400 RPM IDE drive, which has much higher areal density and \nxfer rate off the platters. While it may not spin as fast, the bits / \ncm2 are MUCH higher on that drive, and I can get around 15 megs a second \noff of it with bonnie++. The older 4 gig UW drives can hardly break 5 \nMegs a second xfer rate.\n\nOf course, on the drives you're testing, it is quite likely that the xfer \nrate on the 10k rpm drives are noticeably higher than the xfer rate on \nthe 7200 rpm IDE drives, so that is likely the reason for the better \nperformance.\n\n", "msg_date": "Wed, 4 Dec 2002 13:28:46 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Budget Performance, Part 2" } ]
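On the buffer-tuning question raised near the end of the thread: shared_buffers can only be changed in postgresql.conf and takes effect after a restart, while sort memory can at least be inspected and raised per session. A small sketch, with purely illustrative values:

-- inspect the current settings
SHOW shared_buffers;
SHOW sort_mem;

-- raise sort memory (in kilobytes) for this session only;
-- shared_buffers must be edited in postgresql.conf, then the server restarted
SET sort_mem = 8192;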
[ { "msg_contents": "[I hope job postings are kosher...]\n\nI need help optimizing a PostgreSQL application:\n\nFull-text search\n~17,000 records\nArticles (text) are about 10K long on average, ranging from 0 to 278K.\n\nI don't know if we need to throw more RAM, more hard drive, more\ncomparison RAM in postmaster.conf or build a concordance or if this is\njust not something that can be done within our budget.\n\nI can't even seem to get the PostgreSQL profiling output using \"-s\" in the\nstartup of postmaster and client to determine what the db engine is doing.\n\nI don't understand why PostgreSQL sometimes chooses not to use the\nexisting INDEXes to do an index scan instead of sequential scan -- Does it\nreally think sequential will be faster, or does it eliminate an index scan\nbecause there won't be enough hard drive or swap space to do it?\n\nCurrently, full text search queries take on the order of 2 minutes to\nexecute.\nWe need them to be happening in 5 seconds, if at all possible.\n\nUnfortunately, this needs to happen EARLY THIS WEEK, if at all possible.\n\nContact me off-list with some idea of price/availability/references if you\nare interested in taking on this task.\n\nTHANKS!\n\n\n\n", "msg_date": "Mon, 2 Dec 2002 10:30:50 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "> I don't understand why PostgreSQL sometimes chooses not to use the\n> existing INDEXes to do an index scan instead of sequential scan -- Does it\n> really think sequential will be faster, or does it eliminate an index scan\n\nYes, and it's generally right.\n\n> because there won't be enough hard drive or swap space to do it?\n\nNope. Simply because of time it takes to read from the disk. An index\nscan makes ~ 1 read per tuple and sequential scans make one per page\n(gross simplification).\n\n> Currently, full text search queries take on the order of 2 minutes to\n> execute.\n> We need them to be happening in 5 seconds, if at all possible.\n\nHow about a couple of explains of the queries. What kind of tuning have\nyou done in postgresql.conf. Whats your hardware like? Have you\npartitioned the data to separate disks in any way?\n\nAre you doing mostly (all?) reads? Some writes? Perhaps clustering?\n\nIs this on 7.2 or 7.3? What is the Locale? C or en_US or something\nelse?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "02 Dec 2002 14:20:38 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": ">> I don't understand why PostgreSQL sometimes chooses not to use the\n>> existing INDEXes to do an index scan instead of sequential scan --\n>> Does it really think sequential will be faster, or does it eliminate\n>> an index scan\n>\n> Yes, and it's generally right.\n>\n>> because there won't be enough hard drive or swap space to do it?\n>\n> Nope. Simply because of time it takes to read from the disk. An index\n> scan makes ~ 1 read per tuple and sequential scans make one per page\n> (gross simplification).\n\nHmmm. 
An \"index\" is apparently nothing like I expected it to be...\n\nHere I thought it would be some quick hash-table small data-set lookup\nwith a reference to the OID -- and that most of the hash tables could just\nbe loaded in one fell swoop.\n\nOh well.\n\n>> Currently, full text search queries take on the order of 2 minutes to\n>> execute.\n>> We need them to be happening in 5 seconds, if at all possible.\n>\n> How about a couple of explains of the queries.\n\nExplains were posted previously, but I'll do a couple more.\n\nAt its simplest, this takes 30 seconds:\n\nexplain select article.* from article where lower(text) like '%einstein%';\nNOTICE: QUERY PLAN:\n\nSeq Scan on article (cost=0.00..1155.01 rows=1 width=216)\n\nOr, slightly more complex:\n\nexplain SELECT DISTINCT *, 0 + (0 + 10 * (lower(title) like '%einstein%')\n::int + 10 * (lower(author_flattened) like '%einstein%') ::int + 30 *\n(lower(subject_flattened) like '%einstein%') ::int + 30 * (lower(text)\nLIKE '%einstein%') ::int + 9 * (substring(lower(title), 1, 20) like\n'%einstein%') ::int + 25 * (substring(lower(text), 1, 20) LIKE\n'%einstein%') ::int ) AS points FROM article WHERE TRUE AND (FALSE OR\n(lower(title) like '%einstein%') OR (lower(author_flattened) like\n'%einstein%') OR (lower(subject_flattened) like '%einstein%') OR\n(lower(text) LIKE '%einstein%') ) ORDER BY points desc, volume, number,\narticle.article LIMIT 10, 0;\nNOTICE: QUERY PLAN:\n\nLimit (cost=1418.03..1418.08 rows=1 width=216)\n -> Unique (cost=1418.03..1418.08 rows=1 width=216)\n -> Sort (cost=1418.03..1418.03 rows=1 width=216)\n -> Seq Scan on article (cost=0.00..1418.02 rows=1 width=216)\n\n\n> What kind of tuning have\n> you done in postgresql.conf.\n\nNone. Never really understood what that one memory setting would affect...\n\nAnd the rest of the options seemed to be about logging output (which I\nalso can't seem to crank up to the level of getting query analysis out).\n\nI RTFM, but actually comprehending what was written ... :-^\n\n> Whats your hardware like?\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 11\nmodel name : Intel(R) Pentium(R) III CPU family 1400MHz\nstepping : 1\ncpu MHz : 1406.005\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca\ncmov pat pse36 mmx fxsr sse\nbogomips : 2804.94\n\n total: used: free: shared: buffers: cached:\nMem: 921235456 736669696 184565760 749568 75321344 592257024\nSwap: 2097143808 15368192 2081775616\nMemTotal: 899644 kB\nMemFree: 180240 kB\nMemShared: 732 kB\nBuffers: 73556 kB\nCached: 573896 kB\nSwapCached: 4480 kB\nActive: 433776 kB\nInact_dirty: 182208 kB\nInact_clean: 36680 kB\nInact_target: 229376 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 899644 kB\nLowFree: 180240 kB\nSwapTotal: 2047992 kB\nSwapFree: 2032984 kB\n\n\n\n> Have you\n> partitioned the data to separate disks in any way?\n\nNo, except when attempting to do the PostgreSQL contrib/fulltextindex we \nclustered the _fti table by loading it in word order.\n\n> Are you doing mostly (all?) reads? Some writes? 
Perhaps clustering?\n\nMostly reads.\nSome writes by:\n Admin fixing typos, adding new articles\n Nightly cron jobs to \"flatten\" large-scale JOINs into text contatenations\n (We could get rid of that and go back to the JOINs, now that we've\nfigured out that it's really the full text search that's killing us, not\nthe JOINs)\n\n> Is this on 7.2 or 7.3?\n\n7.1.3\n\n> What is the Locale? C or en_US or something\n> else?\n\nAFAIK, we didn't do anything to alter the locale from whatever the default\nwould be...\n\n\n\n", "msg_date": "Mon, 2 Dec 2002 12:45:43 -0800 (PST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "[email protected] kirjutas T, 03.12.2002 kell 01:45:\n> Explains were posted previously, but I'll do a couple more.\n> \n> At its simplest, this takes 30 seconds:\n> \n> explain select article.* from article where lower(text) like '%einstein%';\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on article (cost=0.00..1155.01 rows=1 width=216)\n\nsearches with LIKE use indexes only when the like expression starts \nwith a string (like 'einstein%') and even then only if in C locale.\n\nYou should check out some real full-text index add-ons, like contrib/tsearch or \nconstruct your own using your imagination plus contrib/intarray and contrib/intagg :)\n\n---------------\nHannu\n\n\n\n\n\n", "msg_date": "03 Dec 2002 02:18:14 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Mon, 2002-12-02 at 12:30, [email protected] wrote:\n> [I hope job postings are kosher...]\n> \n> I need help optimizing a PostgreSQL application:\n> \n> Full-text search\n> ~17,000 records\n> Articles (text) are about 10K long on average, ranging from 0 to 278K.\n> \n> I don't know if we need to throw more RAM, more hard drive, more\n> comparison RAM in postmaster.conf or build a concordance or if this is\n> just not something that can be done within our budget.\n> \n> I can't even seem to get the PostgreSQL profiling output using \"-s\" in the\n> startup of postmaster and client to determine what the db engine is doing.\n> \n> I don't understand why PostgreSQL sometimes chooses not to use the\n> existing INDEXes to do an index scan instead of sequential scan -- Does it\n> really think sequential will be faster, or does it eliminate an index scan\n> because there won't be enough hard drive or swap space to do it?\n> \n> Currently, full text search queries take on the order of 2 minutes to\n> execute.\n> We need them to be happening in 5 seconds, if at all possible.\n> \n> Unfortunately, this needs to happen EARLY THIS WEEK, if at all possible.\n> \n> Contact me off-list with some idea of price/availability/references if you\n> are interested in taking on this task.\n\nAfter reading the thread to see that your box has what looks like\n1GB RAM, and firing up bc(1) to see that 17K articles each of \nwhich is ~10KB == 166MB, it seems to this simple mind that given\nenough buffers, you could suck all of the articles into the\nbuffers. Thus, no more disk IO, but boy would it burn up the CPU!\n\nAlso, I think that I might write some sort of \"book index pre-processor\"\nto run against each article, to create, for each article, a list of \nwords plus byte offsets. (Some tweaking would have to occur in order\nto handle capitalization vagaries. Probably capitalize all \"index\nwords\".) 
(Yes, this method has the limitation of [sub-]word searches\ninstead of arbitrary string searches, \n\nThen, insert all that data into a 3rd table (T_LOOKUP) whose structure\nis:\n val\t\tTEXT\t(primary key)\n article_name\tTEXT\n byte_offset\tINTEGER\n\nThen, 'EINSTEIN%' queries would go against T_LOOKUP instead of the\narticles table.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "02 Dec 2002 21:56:04 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nfor the maximum number of tables in a database.\n\nI'm thinking about separating a table with up to millions of rows into\nseveral tables with the same set of columns to speed up some complex\nqueries. As the size of the original table is increasing fast, I want\nto get it separated once the size grows up to a limit. So there\nwill be a large amount of tables (having same structure) in a database. Is\nthere any potential performance problem with this design?\n\nThanks.\n\nLi Li\n\n\n", "msg_date": "Mon, 2 Dec 2002 21:46:43 -0800 (PST)", "msg_from": "li li <[email protected]>", "msg_from_op": false, "msg_subject": "Is there any limitations" }, { "msg_contents": "On Mon, Dec 02, 2002 at 09:46:43PM -0800, li li wrote:\n> \n> for the maximum number of tables in a database.\n\n<http://www.ca.postgresql.org/users-lounge/limitations.html>\n\nFor practical purposes, probably not.\n\n> to get it separated once the size grows up to a limit. So there\n> will be a large amount of tables (having same structure) in a database. Is\n> there any potential performance problem with this design?\n\nIt depends on what you're going to do. If the idea is to join across\nthe tables, it'll probably perform worse than just ahving a large\ntable. OTOH, if what you're doing is, say, archiving from time to\ntime, it doesn't seem unreasonable.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 3 Dec 2002 07:34:04 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any limitations" }, { "msg_contents": "\n>\n> It depends on what you're going to do. If the idea is to join across\n> the tables, it'll probably perform worse than just ahving a large\n> table. OTOH, if what you're doing is, say, archiving from time to\n> time, it doesn't seem unreasonable.\n>\nThe purpose for this design is to avoid record lookup in a huge table.\nI expect to see the query results in, say, one minute, by searching a much\nsmaller table (not join across multiple tables).\n\nThanks and regards.\n\nLi Li\n\n", "msg_date": "Tue, 3 Dec 2002 11:49:03 -0800 (PST)", "msg_from": "li li <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any limitations" }, { "msg_contents": "On Tuesday 03 Dec 2002 7:49 pm, li li wrote:\n> > It depends on what you're going to do. If the idea is to join across\n> > the tables, it'll probably perform worse than just ahving a large\n> > table. 
OTOH, if what you're doing is, say, archiving from time to\n> > time, it doesn't seem unreasonable.\n>\n> The purpose for this design is to avoid record lookup in a huge table.\n> I expect to see the query results in, say, one minute, by searching a much\n> smaller table (not join across multiple tables).\n>\n> Thanks and regards.\n\nIf you only want *most* queries to finish in one minute - I've used two tables \nin the past. One for recent info (which is what most of my users wanted) and \none for older info (which only got accessed rarely). You're only union-ing \ntwo tables then and you can cluster the older table as mentioned elsewhere.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 4 Dec 2002 09:29:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any limitations" }, { "msg_contents": "On Wed, 2002-12-04 at 09:29, Richard Huxton wrote:\n> On Tuesday 03 Dec 2002 7:49 pm, li li wrote:\n> > > It depends on what you're going to do. If the idea is to join across\n> > > the tables, it'll probably perform worse than just ahving a large\n> > > table. OTOH, if what you're doing is, say, archiving from time to\n> > > time, it doesn't seem unreasonable.\n> >\n> > The purpose for this design is to avoid record lookup in a huge table.\n> > I expect to see the query results in, say, one minute, by searching a much\n> > smaller table (not join across multiple tables).\n> >\n> > Thanks and regards.\n> \n> If you only want *most* queries to finish in one minute - I've used two tables \n> in the past. One for recent info (which is what most of my users wanted) and \n> one for older info (which only got accessed rarely). You're only union-ing \n> two tables then and you can cluster the older table as mentioned elsewhere.\n\nANother approach could be to have index on timestamp field (which should\nbe naturally clustered) and search in recent data only.\n\nIf the problem is simply too much data returned, you could use LIMIT.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "04 Dec 2002 12:23:41 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any limitations" } ]
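A minimal sketch of the word-lookup idea proposed earlier in this thread, using the hypothetical T_LOOKUP layout from that message (the index name is made up). The point is only that an anchored pattern such as 'EINSTEIN%' can use a plain btree index -- which, as noted above, LIKE only does under the C locale -- whereas '%einstein%' never can:

CREATE TABLE t_lookup (
    val           TEXT,     -- capitalized index word produced by the pre-processor
    article_name  TEXT,
    byte_offset   INTEGER
);

CREATE INDEX t_lookup_val_idx ON t_lookup (val);

-- anchored lookup instead of scanning every article body:
SELECT DISTINCT article_name
FROM t_lookup
WHERE val LIKE 'EINSTEIN%';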
[ { "msg_contents": "(Should probably be in [SQL] by now....)\n\nI've changed my table declarations to agree on the datatypes and only one \nsimular problem with an update-query doesn't seem to be solved.\n\n(see plan below)\n\n* the concatenation in the lbar select can't be avoided, it's just the way the \ndata is => this does result in a resulting type 'text', AFAIK\n\n* the aux_address.old_id is also of type 'text'\n\n\nStill, the planner does a nested loop here against large costs... ;(\n\n\nAny hints on this (last) one....?\n\n\n\nTIA,\n\n\n\n\n\nFrank.\n\n\n\ntrial=# explain update address set region_id = lbar.region_id from\n\t(select debtor_id || '-' || address_seqnr as id, region_id from\n\t\tlist_base_regions) as lbar, aux_address aa \n\twhere lbar.id = aa.old_id and address.id = aa.id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1.07..65.50 rows=3 width=253)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Nested Loop (cost=0.00..643707.03 rows=3980 width=28)\n Join Filter: ((((\"inner\".debtor_id)::text || '-'::text) || \n(\"inner\".address_seqnr)::text) = \"outer\".old_id)\n -> Index Scan using aux_address_idx2 on aux_address aa \n(cost=0.00..81.88 rows=3989 width=16)\n -> Seq Scan on list_base_regions (cost=0.00..71.80 rows=3980 \nwidth=12)\n -> Sort (cost=1.07..1.08 rows=3 width=225)\n Sort Key: address.id\n -> Seq Scan on address (cost=0.00..1.05 rows=3 width=225)\n Filter: ((id = 1) IS NOT TRUE)\n(10 rows)\n\n", "msg_date": "Tue, 3 Dec 2002 00:51:03 +0100", "msg_from": "\"ir. F.T.M. van Vugt bc.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n UNION" }, { "msg_contents": "\"ir. F.T.M. van Vugt bc.\" <[email protected]> writes:\n> Any hints on this (last) one....?\n\n> -> Nested Loop (cost=0.00..643707.03 rows=3980 width=28)\n> Join Filter: ((((\"inner\".debtor_id)::text || '-'::text) || \n> (\"inner\".address_seqnr)::text) = \"outer\".old_id)\n\nLooks to me like debtor_id and address_seqnr are not text type, but are\nbeing compared to things that are text. Hard to tell exactly what's\ngoing on though --- I suppose this query is getting rewritten by a rule?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Dec 2002 00:59:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n\tUNION" }, { "msg_contents": "> > Any hints on this (last) one....?\n> > -> Nested Loop (cost=0.00..643707.03 rows=3980 width=28)\n> > Join Filter: ((((\"inner\".debtor_id)::text || '-'::text) ||\n> > (\"inner\".address_seqnr)::text) = \"outer\".old_id)\n>\n> Looks to me like debtor_id and address_seqnr are not text type, but are\n> being compared to things that are text. \n\nThey were coerced, yes, but changing those original types helps only so much:\n\n* lbar.debtor_id is of type text\n* lbar.address_seqnr is of type text\n* aa.old_id is of type text\n\ntrial=# explain update address set region_id = lbar.region_id from \n\t(select debtor_id || '-' || address_seqnr as f_id, region_id from\n\t\tlist_base_regions) as lbar, aux_address aa\n\t\twhere lbar.f_id = aa.old_id and address.id = aa.id;\n\n\nSince the left side of the join clause is composed out of three concatenated \ntext-parts resulting in one single piece of type text, I'd expect the planner \nto avoid the nested loop. 
Still:\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Merge Join (cost=1.07..16.07 rows=1 width=309)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Nested Loop (cost=0.00..149669.38 rows=1000 width=84)\n Join Filter: (((\"inner\".debitor_id || '-'::text) || \n\"inner\".address_seqnr) = \"outer\".old_id)\n -> Index Scan using aux_address_idx2 on aux_address aa \n(cost=0.00..81.88 rows=3989 width=16)\n -> Seq Scan on list_base_regions (cost=0.00..20.00 rows=1000 \nwidth=68)\n -> Sort (cost=1.07..1.08 rows=3 width=225)\n Sort Key: address.id\n -> Seq Scan on address (cost=0.00..1.05 rows=3 width=225)\n Filter: ((id = 1) IS NOT TRUE)\n(10 rows)\n\n\n\n> Hard to tell exactly what's going on though\n\nDoes this help?\n\n\n\n\nNB: it seems the data types part of the manual doesn't enlighten me on this \nsubject, any suggestions where to find more input?\n\n\n\n\n\nRegards,\n\n\n\n\nFrank.\n", "msg_date": "Tue, 3 Dec 2002 10:38:10 +0100", "msg_from": "\"Frank van Vugt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n UNION" }, { "msg_contents": "\"Frank van Vugt\" <[email protected]> writes:\n> Since the left side of the join clause is composed out of three concatenated \n> text-parts resulting in one single piece of type text, I'd expect the\n> planner to avoid the nested loop.\n\nProbably not, since the first thing it does is to flatten the\nsub-select, leaving it with a concatenation expression in the\nWHERE-clause. (I was too sleepy last night to realize that you\nwere comparing a concatenation to old_id, rather than making two\nseparate comparisons :-()\n\nWe really need to fix the planner to be able to do merge/hash on\n\"arbitrary expression = arbitrary expression\", not only \"Var = Var\".\nIIRC, this is doable in principle, but there are a few routines that\nwould need to be improved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Dec 2002 09:35:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n\tUNION" }, { "msg_contents": "> Probably not, since the first thing it does is to flatten the\n> sub-select, leaving it with a concatenation expression in the\n> WHERE-clause. \n\nAh, I see.\n\nSo, I'll just split this thingy into two seperate queries, starting with \ncreating a temp table containing the straight subselect results.\n\n> We really need to fix the planner to be able to do merge/hash on\n> \"arbitrary expression = arbitrary expression\", not only \"Var = Var\".\n\nI can get around it, so I'm not complaining ;-)\n\n\nTom, thanks a *lot* for the prompt responses !!\n\n\n\nBest,\n\n\n\n\n\nFrank.\n\n", "msg_date": "Tue, 3 Dec 2002 16:23:51 +0100", "msg_from": "Frank van Vugt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 versus v7.3 -> huge performance penalty for JOIN with\n UNION" } ]
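A rough sketch of the two-step workaround mentioned at the end of this thread: materialize the concatenation into a temporary table first, so the final join compares plain columns and the planner is free to pick a merge or hash join. Table and column names are taken from the thread; the temporary table and index names are made up:

CREATE TEMP TABLE lbar AS
    SELECT debtor_id || '-' || address_seqnr AS f_id, region_id
    FROM list_base_regions;

CREATE INDEX lbar_f_id_idx ON lbar (f_id);
ANALYZE lbar;

UPDATE address
SET region_id = lbar.region_id
FROM lbar, aux_address aa
WHERE lbar.f_id = aa.old_id
  AND address.id = aa.id;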
[ { "msg_contents": "Hi Li Li, \n\n> \n> I'm thinking about separating a table with up to millions of rows into\n> several tables with the same set of columns to speed up some complex\n> queries. \n\nI thought of doing this recently, as queries were taking so long. Instead\nof breaking the table up, we clustered the data. This physically moves all\nthe data by key close to each other on disk (sounds kind of like defragging\na disk). This boosts query responses no end - for example our table has ~\n10 million rows, a query that was taking 45 seconds to return, now takes 7\nseconds. To keep the table tidy, we run the cluster regularly.\n\n> As the size of the original table is increasing fast, I want\n> to get it separated once the size grows up to a limit. So there\n> will be a large amount of tables (having same structure) in a \n> database. Is\n> there any potential performance problem with this design?\n> \n\nI think the problems would mainly be in management, as you would have to\nkeep track of the new table names, key names, and index names. \n\nNikk\n\n\n\n\n\nRE: [PERFORM] Is there any limitations\n\n\nHi Li Li, \n\n> \n> I'm thinking about separating a table with up to millions of rows into\n> several tables with the same set of columns to speed up some complex\n> queries. \n\nI thought of doing this recently, as queries were taking so long.  Instead of breaking the table up, we clustered the data.  This physically moves all the data by key close to each other on disk (sounds kind of like defragging a disk).  This boosts query responses no end - for example our table has ~ 10 million rows, a query that was taking 45 seconds to return, now takes 7 seconds.  To keep the table tidy, we run the cluster regularly.\n> As the size of the original table is increasing fast, I want\n> to get it separated once the size grows up to a limit. So there\n> will be a large amount of tables (having same structure) in a \n> database. Is\n> there any potential performance problem with this design?\n> \n\nI think the problems would mainly be in management, as you would have to keep track of the new table names, key names, and index names.  \nNikk", "msg_date": "Tue, 3 Dec 2002 13:41:14 -0000 ", "msg_from": "Nikk Anderson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is there any limitations" }, { "msg_contents": "Hi Nikk,\n\n> I thought of doing this recently, as queries were taking so long. Instead\n> of breaking the table up, we clustered the data. This physically moves all\n> the data by key close to each other on disk (sounds kind of like defragging\n> a disk). This boosts query responses no end - for example our table has ~\n> 10 million rows, a query that was taking 45 seconds to return, now takes 7\n> seconds. To keep the table tidy, we run the cluster regularly.\n>\nI've clustered the data with a non-key attribute. Now the query time is\nabout couple of minutes, but I expect less than one minute. Is there any trick\nin using cluster? I found that the primary key disappeared after\nclustering. Or it's better to cluster with primary key? My primary key is\na composite. I picked one attribute as cluster key.\n\n> > As the size of the original table is increasing fast, I want\n> > to get it separated once the size grows up to a limit. So there\n> > will be a large amount of tables (having same structure) in a\n> > database. 
Is\n> > there any potential performance problem with this design?\n> >\n>\n> I think the problems would mainly be in management, as you would have to\n> keep track of the new table names, key names, and index names.\n>\nYou are right. I have to keep track of these table names.\nHowever, I don't see any necessity for key names or index names. Because,\nas I mentioned above, all these tables have exactly the same structure.\n\nThanks for the quick response.\n\nLi Li\n\n", "msg_date": "Tue, 3 Dec 2002 12:06:09 -0800 (PST)", "msg_from": "li li <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there any limitations" } ]
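For reference, a minimal example of the clustering discussed above (all names are placeholders). CLUSTER physically rewrites the table in the order of the named index, so it helps most when that index matches the columns the slow queries filter or sort on. The "primary key disappeared" observation above suggests the release in use does not preserve other indexes and constraints across a CLUSTER, so check the documentation for your version before relying on it:

CREATE INDEX bigtable_key_idx ON bigtable (key_col);
CLUSTER bigtable_key_idx ON bigtable;
ANALYZE bigtable;

-- new rows are not kept in clustered order, hence the periodic
-- re-clustering mentioned above.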
[ { "msg_contents": "I am having some problems with user-defined functions in version 7.3. The\nplanner seems to refuse to use an index that I have created when I define an\nSQL function that should take advantage of it. The thing that is driving me\nnuts is that if I take the SQL from the function definition and run it\nexactly as it is, replacing the parameters with real values, then it does\nuse the index and performs beautifully. I never saw this problem until I\nupgraded from 7.2.3 to 7.3.\n\nAt the bottom of this email, I have included a psql test input file and the\nresults. I have an index on zip_locs(dist1,dist2,dist3,dist4). I'm joining\na table of about 350,000 rows (mytable) against another table of about\n42,000 rows (zip_locs) on a ZIP code. The ZIP fields in both tables are\nindexed as well. The functions zip_dist[1234](varchar) return the\nrespective dist[1234] value for the given ZIP code. The zip_lat(varchar)\nand zip_lng(varchar) functions return the latitude and longitude for the\ngiven ZIP code, respectively. All these functions are immutable so they\nhave virtually no effect on the speed of the query. The point of the query\nis to get a count of records in mytable that are within a certain distance\nof a given ZIP code.\n\nWhen I do the explicit SELECT, it uses the aforementioned index and then\nfilters on the result of the earth_distance(real,real,real,real) function.\nWhen I run the radiuscount(varchar,real) function, it apparently does a\nsequential scan instead of using the index.\n\nI have tried rewriting this query every way I know how, but nothing seems to\nwork. Can anybody help me with this?\n\nHere is the psql input file I'm using to demonstrate:\n*******************\nCREATE OR REPLACE FUNCTION radiuscount(varchar, real) RETURNS bigint AS\n'\nSELECT COUNT(*)\nFROM mytable JOIN zip_locs ON zip = zip_code\nWHERE\n dist1 BETWEEN zip_dist1($1) - $2::real AND zip_dist1($1) +\n$2::real\n AND dist2 BETWEEN zip_dist2($1) - $2::real AND zip_dist2($1) +\n$2::real\n AND dist3 BETWEEN zip_dist3($1) - $2::real AND zip_dist3($1) +\n$2::real\n AND dist4 BETWEEN zip_dist4($1) - $2::real AND zip_dist4($1) +\n$2::real\n AND earth_distance(zip_lat($1), zip_lng($1), lat, lng) < $2::real\n' LANGUAGE 'SQL'\nSTABLE\nRETURNS NULL ON NULL INPUT\n;\n\n\\timing\n\\a\n\\t\n\n\\echo\n\\echo 'NOT using the function'\nSELECT COUNT(*) AS radiuscount\nFROM mytable JOIN zip_locs ON zip = zip_code\nWHERE\n dist1 BETWEEN zip_dist1('30096') - 20::real AND\nzip_dist1('30096') + 20::real\n AND dist2 BETWEEN zip_dist2('30096') - 20::real AND\nzip_dist2('30096') + 20::real\n AND dist3 BETWEEN zip_dist3('30096') - 20::real AND\nzip_dist3('30096') + 20::real\n AND dist4 BETWEEN zip_dist4('30096') - 20::real AND\nzip_dist4('30096') + 20::real\n AND earth_distance(zip_lat('30096'), zip_lng('30096'), lat, lng) <\n20::real\n;\n\n\\echo\n\\echo 'Using the function'\nselect radiuscount('30096',20);\n*******************\n\nAnd here is the output:\n*******************\nCREATE FUNCTION\nTiming is on.\nOutput format is unaligned.\nShowing only tuples.\n\nNOT using the function\n2775\nTime: 584.02 ms\n\nUsing the function\n2775\nTime: 11693.56 ms\n*******************\n\n\n", "msg_date": "Tue, 3 Dec 2002 14:11:43 -0500", "msg_from": "\"Ben Gunter\" <[email protected]>", "msg_from_op": true, "msg_subject": "v7.3 planner and user-defined functions" } ]
[ { "msg_contents": "\nI have the following query:\n\nSELECT p.userid, p.year, a.country, a.province, a.city\nFROM profile p, account a \nWHERE p.userid=a.userid AND \n\t(p.year BETWEEN 1961 AND 1976) AND \n\ta.country='CA' AND \n\ta.province='BC' AND \n\tp.gender='f' AND \n\tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND \n\tblock.userid IS NOT NULL AND \n\tp.userid IN \n\t(SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN \n\tf.minage AND f.maxage)\n\nIn plain English, it is that \n\nJoe finds females between the ages in the location who is not in the block table, while Joe's age is between what they \nprefer.\n\nThe query plan is the followings:\n\nNested Loop (cost=0.00..127.12 rows=995 width=894)\n -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n SubPlan\n -> Materialize (cost=22.50..22.50 rows=5 width=55)\n -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n)\n -> Materialize (cost=44.82..44.82 rows=111 width=89)\n -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n\nIt seems take quite long to run this query. How to optimise the query?\n\nThanks for your input.\n\nVernon\n \n\n\n", "msg_date": "Wed, 04 Dec 2002 16:26:14 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Is a better way to have the same result of this query?" }, { "msg_contents": "On Wed, 2002-12-04 at 18:26, Vernon Wu wrote:\n> I have the following query:\n> \n> SELECT p.userid, p.year, a.country, a.province, a.city\n> FROM profile p, account a \n> WHERE p.userid=a.userid AND \n> \t(p.year BETWEEN 1961 AND 1976) AND \n> \ta.country='CA' AND \n> \ta.province='BC' AND \n> \tp.gender='f' AND \n> \tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND \n> \tblock.userid IS NOT NULL AND \n> \tp.userid IN \n> \t(SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN \n> \tf.minage AND f.maxage)\n> \n> In plain English, it is that \n> \n> Joe finds females between the ages in the location who is not in the block table, while Joe's age is between what they \n> prefer.\n> \n> The query plan is the followings:\n> \n> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n> SubPlan\n> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n> )\n> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n> \n> It seems take quite long to run this query. 
How to optimise the query?\n> \n> Thanks for your input.\n> \n> Vernon\n\nWhat kind of indexes, if any, do you have on, and what is the\ncardinality of account, block and preference?\n\nWhat version of Postgres are you using?\n\nHow much shared memory and buffers are you using?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "04 Dec 2002 23:26:48 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "Vernon Wu wrote:\n> \n> SELECT p.userid, p.year, a.country, a.province, a.city\n> FROM profile p, account a \n> WHERE p.userid=a.userid AND \n> \t(p.year BETWEEN 1961 AND 1976) AND \n> \ta.country='CA' AND \n> \ta.province='BC' AND \n> \tp.gender='f' AND \n> \tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND \n> \tblock.userid IS NOT NULL AND \n> \tp.userid IN \n> \t(SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN \n> \tf.minage AND f.maxage)\n\nYou might want to flatten this into more joins and less subqueries, \nespecially since you are using IN which is not very optimized:\n\nSELECT p.userid, p.year, a.country, a.province, a.city\nFROM profile p, account a, preference f, profile p1\nWHERE\n\tf.userid = p.userid AND\n\tp.userid=a.userid AND\n\t(p.year BETWEEN 1961 AND 1976) AND\n\ta.country='CA' AND\n\ta.province='BC' AND\n\tp.gender='f' AND\n\tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND\n\tblock.userid IS NOT NULL AND\n\tp1.userid='Joe' AND\n\t2002-p1.year BETWEEN f.minage AND f.maxage\n\nAlso, I am not sure about the NOT IN. If you can rewrite it using EXISTS \ntry that, it might be faster.\n\n\n> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n> SubPlan\n> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n> )\n> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n\nrows=1000 usually indicates you didn't vacuum analyze. Did you?\n\n> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n\nAnd to add to Vernons questions: if you are using PostgreSQL 7.2 or \nlater, please send us the EXPLAIN ANALYZE output.\n\nJochem\n\n", "msg_date": "Thu, 05 Dec 2002 11:01:15 +0100", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this query?" }, { "msg_contents": "Jochem,\n\nThanks for your suggestion/information. \n\nThe followings are the analyise outcomes after I did some modifications with the query. My finding is that the new query \ndoes improve the performance according to the plan. 
The actual time is reversed might due to the fact the test data is \nvery small (the machine is a very old one by the way). The userid is the key for all tables and the gender is indexed. Do I \nalso index the country and province to improve the preformance?\n\nThe modified query with the suggested flatting query.\n\nNested Loop (cost=0.00..91.97 rows=995 width=445) (actual time=1.00..3.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..62.02 rows=1 width=445) (actual time=1.00..3.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..34.68 rows=1 width=378) (actual time=1.00..3.00 rows=3 loops=1)\n -> Nested Loop (cost=0.00..29.84 rows=1 width=366) (actual time=1.00..3.00 rows=3 loops=1)\n -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289) (actual time=0.00..0.00 rows=3 loops=1)\n -> Index Scan using pk_preference on preference f (cost=0.00..4.82 rows=1 width=77) (actual time=\n0.67..1.00 rows=1 loops=3)\n -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12) (actual time=0.00..0.00 rows=\n1 loops=3)\n -> Index Scan using pk_profile on profile p (cost=0.00..27.33 rows=1 width=67) (actual time=0.00..0.00 rows=1 \nloops=3)\n SubPlan\n -> Materialize (cost=22.50..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=2)\n -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=1)\n -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0) (actual time=0.00..0.00 rows=1 loops=2)\nTotal runtime: 5.00 msec\n\nAfter replacing \"p.userid NOT IN\" with \"NOT EXISTS\":\n\nResult (cost=0.00..61.56 rows=995 width=445) (actual time=3.00..4.00 rows=2 loops=1)\n InitPlan\n -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=1)\n -> Nested Loop (cost=0.00..61.56 rows=995 width=445) (actual time=3.00..4.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..31.61 rows=1 width=445) (actual time=3.00..4.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..26.77 rows=1 width=433) (actual time=2.00..3.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..21.93 rows=1 width=356) (actual time=2.00..2.00 rows=2 loops=1)\n -> Index Scan using profile_sex_idx on profile p (cost=0.00..17.09 rows=1 width=67) (actual time=\n1.00..1.00 rows=2 loops=1)\n -> Index Scan using pk_account on account a (cost=0.00..4.83 rows=1 width=289) (actual time=\n0.50..0.50 rows=1 loops=2)\n -> Index Scan using pk_preference on preference f (cost=0.00..4.82 rows=1 width=77) (actual time=\n0.50..0.50 rows=1 loops=2)\n -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12) (actual time=0.50..0.50 rows=\n1 loops=2)\n -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0) (actual time=0.00..0.00 rows=1 loops=2)\nTotal runtime: 7.00 msec\n\nAfter vacuum analyze:\n\nResult (cost=3.19..5.29 rows=1 width=91) (actual time=3.00..4.00 rows=2 loops=1)\n InitPlan\n -> Seq Scan on block b (cost=0.00..1.01 rows=1 width=7) (actual time=0.00..0.00 rows=0 loops=1)\n -> Nested Loop (cost=3.19..5.29 rows=1 width=91) (actual time=3.00..4.00 rows=2 loops=1)\n -> Hash Join (cost=3.19..4.27 rows=1 width=91) (actual time=3.00..3.00 rows=2 loops=1)\n -> Hash Join (cost=2.13..3.20 rows=1 width=72) (actual time=2.00..2.00 rows=3 loops=1)\n -> Seq Scan on account a (cost=0.00..1.04 rows=3 width=31) (actual time=1.00..1.00 rows=3 loops=1)\n -> Hash (cost=2.13..2.13 rows=1 width=41) (actual time=0.00..0.00 rows=0 loops=1)\n -> Nested Loop (cost=0.00..2.13 rows=1 width=41) (actual time=0.00..0.00 rows=3 loops=1)\n -> Seq Scan on profile p1 
(cost=0.00..1.04 rows=1 width=12) (actual time=0.00..0.00 rows=1 \nloops=1)\n -> Seq Scan on preference f (cost=0.00..1.03 rows=3 width=29) (actual time=0.00..0.00 rows=3 \nloops=1)\n -> Hash (cost=1.05..1.05 rows=2 width=19) (actual time=1.00..1.00 rows=0 loops=1)\n -> Seq Scan on profile p (cost=0.00..1.05 rows=2 width=19) (actual time=1.00..1.00 rows=2 loops=1)\n -> Seq Scan on block (cost=0.00..1.01 rows=1 width=0) (actual time=0.00..0.00 rows=1 loops=2)\nTotal runtime: 7.00 msec\n\nThe original query\n\nNested Loop (cost=0.00..127.12 rows=995 width=894) (actual time=1.00..2.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..97.17 rows=1 width=894) (actual time=1.00..1.00 rows=2 loops=1)\n -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289) (actual time=0.00..0.00 rows=3 loops=1)\n -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605) (actual time=0.33..0.33 rows=1 \nloops=3)\n SubPlan\n -> Materialize (cost=22.50..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=2)\n -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=1)\n -> Materialize (cost=44.82..44.82 rows=111 width=89) (actual time=0.50..0.50 rows=1 loops=2)\n -> Nested Loop (cost=0.00..44.82 rows=111 width=89) (actual time=0.00..0.00 rows=3 loops=1)\n -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12) (actual time=\n0.00..0.00 rows=1 loops=1)\n -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77) (actual time=0.00..0.00 rows=\n3 loops=1)\n -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0) (actual time=0.00..0.00 rows=1 loops=2)\nTotal runtime: 4.00 msec\n\nAfter replacing \"p.userid NOT IN\" with \"NOT EXISTS\":\n\nResult (cost=0.00..104.62 rows=995 width=894) (actual time=1.00..2.00 rows=2 loops=1)\n InitPlan\n -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55) (actual time=0.00..0.00 rows=0 loops=1)\n -> Nested Loop (cost=0.00..104.62 rows=995 width=894) (actual time=1.00..1.00 rows=2 loops=1)\n -> Nested Loop (cost=0.00..74.67 rows=1 width=894) (actual time=1.00..1.00 rows=2 loops=1)\n -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289) (actual time=0.00..0.00 rows=3 loops=1)\n -> Index Scan using pk_profile on profile p (cost=0.00..49.66 rows=1 width=605) (actual time=0.33..0.33 \nrows=1 loops=3)\n SubPlan\n -> Materialize (cost=44.82..44.82 rows=111 width=89) (actual time=0.50..0.50 rows=1 loops=2)\n -> Nested Loop (cost=0.00..44.82 rows=111 width=89) (actual time=0.00..1.00 rows=3 loops=1)\n -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12) (actual time=\n0.00..0.00 rows=1 loops=1)\n -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77) (actual time=0.00..1.00 \nrows=3 loops=1)\n -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0) (actual time=0.00..0.00 rows=1 loops=2)\nTotal runtime: 4.00 msec\n\nAfter vacuum analyze:\n\nResult (cost=7.30..9.39 rows=1 width=63) (actual time=3.00..3.00 rows=2 loops=1)\n InitPlan\n -> Seq Scan on block b (cost=0.00..1.01 rows=1 width=7) (actual time=0.00..0.00 rows=0 loops=1)\n -> Nested Loop (cost=7.30..9.39 rows=1 width=63) (actual time=3.00..3.00 rows=2 loops=1)\n -> Seq Scan on block (cost=0.00..1.01 rows=1 width=0) (actual time=0.00..0.00 rows=1 loops=1)\n -> Materialize (cost=8.37..8.37 rows=1 width=63) (actual time=3.00..3.00 rows=2 loops=1)\n -> Hash Join (cost=7.30..8.37 rows=1 width=63) (actual time=2.00..3.00 rows=2 loops=1)\n -> Seq Scan on account a (cost=0.00..1.04 rows=3 width=31) 
(actual time=0.00..0.00 rows=3 loops=1)\n -> Hash (cost=7.30..7.30 rows=1 width=32) (actual time=2.00..2.00 rows=0 loops=1)\n -> Index Scan using profile_sex_idx on profile p (cost=0.00..7.30 rows=1 width=32) (actual time=\n2.00..2.00 rows=2 loops=1)\n SubPlan\n -> Materialize (cost=2.13..2.13 rows=1 width=41) (actual time=0.50..0.50 rows=1 loops=2)\n -> Nested Loop (cost=0.00..2.13 rows=1 width=41) (actual time=1.00..1.00 rows=3 loops=1)\n -> Seq Scan on profile p1 (cost=0.00..1.04 rows=1 width=12) (actual time=0.00..0.00 \nrows=1 loops=1)\n -> Seq Scan on preference f (cost=0.00..1.03 rows=3 width=29) (actual time=0.00..0.00 \nrows=3 loops=1)\nTotal runtime: 4.00 msec\n\n\n12/5/2002 2:01:15 AM, Jochem van Dieten <[email protected]> wrote:\n\n>Vernon Wu wrote:\n>> \n>> SELECT p.userid, p.year, a.country, a.province, a.city\n>> FROM profile p, account a \n>> WHERE p.userid=a.userid AND \n>> \t(p.year BETWEEN 1961 AND 1976) AND \n>> \ta.country='CA' AND \n>> \ta.province='BC' AND \n>> \tp.gender='f' AND \n>> \tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND \n>> \tblock.userid IS NOT NULL AND \n>> \tp.userid IN \n>> \t(SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN \n>> \tf.minage AND f.maxage)\n>\n>You might want to flatten this into more joins and less subqueries, \n>especially since you are using IN which is not very optimized:\n>\n>SELECT p.userid, p.year, a.country, a.province, a.city\n>FROM profile p, account a, preference f, profile p1\n>WHERE\n>\tf.userid = p.userid AND\n>\tp.userid=a.userid AND\n>\t(p.year BETWEEN 1961 AND 1976) AND\n>\ta.country='CA' AND\n>\ta.province='BC' AND\n>\tp.gender='f' AND\n>\tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND\n>\tblock.userid IS NOT NULL AND\n>\tp1.userid='Joe' AND\n>\t2002-p1.year BETWEEN f.minage AND f.maxage\n>\n>Also, I am not sure about the NOT IN. If you can rewrite it using EXISTS \n>try that, it might be faster.\n>\n>\n>> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n>> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n>> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n>> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n>> SubPlan\n>> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n>> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n>> )\n>> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n>> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n>> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n>> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n>\n>rows=1000 usually indicates you didn't vacuum analyze. Did you?\n>\n>> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n>\n>And to add to Vernons questions: if you are using PostgreSQL 7.2 or \n>later, please send us the EXPLAIN ANALYZE output.\n>\n>Jochem\n>\n>\n\n\n\n", "msg_date": "Thu, 05 Dec 2002 10:44:34 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this query?" }, { "msg_contents": "Ron,\n\nThe gender is indexed. 
Each user has account and preference, but not necessary block.\n\nI am currently seeking for query optimisation, not system configuration optimisation\n \n12/4/2002 9:26:48 PM, Ron Johnson <[email protected]> wrote:\n\n>On Wed, 2002-12-04 at 18:26, Vernon Wu wrote:\n>> I have the following query:\n>> \n>> SELECT p.userid, p.year, a.country, a.province, a.city\n>> FROM profile p, account a \n>> WHERE p.userid=a.userid AND \n>> \t(p.year BETWEEN 1961 AND 1976) AND \n>> \ta.country='CA' AND \n>> \ta.province='BC' AND \n>> \tp.gender='f' AND \n>> \tp.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND \n>> \tblock.userid IS NOT NULL AND \n>> \tp.userid IN \n>> \t(SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN \n>> \tf.minage AND f.maxage)\n>> \n>> In plain English, it is that \n>> \n>> Joe finds females between the ages in the location who is not in the block table, while Joe's age is between what \nthey \n>> prefer.\n>> \n>> The query plan is the followings:\n>> \n>> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n>> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n>> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n>> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n>> SubPlan\n>> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n>> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n>> )\n>> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n>> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n>> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n>> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n>> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n>> \n>> It seems take quite long to run this query. How to optimise the query?\n>> \n>> Thanks for your input.\n>> \n>> Vernon\n>\n>What kind of indexes, if any, do you have on, and what is the\n>cardinality of account, block and preference?\n>\n>What version of Postgres are you using?\n>\n>How much shared memory and buffers are you using?\n>\n>-- \n>+------------------------------------------------------------+\n>| Ron Johnson, Jr. mailto:[email protected] |\n>| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n>| |\n>| \"they love our milk and honey, but preach about another |\n>| way of living\" |\n>| Merle Haggard, \"The Fighting Side Of Me\" |\n>+------------------------------------------------------------+\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n\n", "msg_date": "Thu, 05 Dec 2002 11:08:17 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "Vernon Wu wrote:\n\n > Ron,\n >\n > The gender is indexed. Each user has account and preference, but not \nnecessary block.\n\n\nIndexing on gender won't speed up your query - it can even slow it down. \nYou have probably 50% of \"f\" and 50% of \"m\". Using index on gender will \ndivide your potential answers by 2. Make index on columns, which \nexcludes much more useless rows.\nI think you can create index on:\n- block/personid\n- profile/userid\n\nI read in Postgres documentation(but didn't try) that you can also \nchange \"id NOT IN (select id\" to \"not exists select * where id=\". 
It may \nhelp also.\n\nDo user have more than one account or preference?\nIf no, you can change \"not in\" into \"inner/outer join\" which are the \nbest ones.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Thu, 05 Dec 2002 20:37:27 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "It is now common knowledge that the IN clause should be rewriten as an\nEXISTS.\n\nSELECT p.userid, p.year, a.country, a.province, a.city\nFROM profile p, account a\nWHERE p.userid=a.userid AND\n (p.year BETWEEN 1961 AND 1976) AND\n a.country='CA' AND\n a.province='BC' AND\n p.gender='f' AND\n NOT EXISTS ( SELECT 1 FROM block b WHERE b.personid='Joe' AND p.userid\n= b.userid) AND\n block.userid IS NOT NULL AND\n EXISTS ( SELECT 1 FROM preference f, profile p1 \n WHERE p1.userid='Joe' AND p.userid = f.userif AND\n 2002-p1.year BETWEEN f.minage AND f.maxage);\n\n\n\nVernon Wu wrote:\n> \n> Ron,\n> \n> The gender is indexed. Each user has account and preference, but not necessary block.\n> \n> I am currently seeking for query optimisation, not system configuration optimisation\n> \n> 12/4/2002 9:26:48 PM, Ron Johnson <[email protected]> wrote:\n> \n> >On Wed, 2002-12-04 at 18:26, Vernon Wu wrote:\n> >> I have the following query:\n> >>\n> >> SELECT p.userid, p.year, a.country, a.province, a.city\n> >> FROM profile p, account a\n> >> WHERE p.userid=a.userid AND\n> >> (p.year BETWEEN 1961 AND 1976) AND\n> >> a.country='CA' AND\n> >> a.province='BC' AND\n> >> p.gender='f' AND\n> >> p.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND\n> >> block.userid IS NOT NULL AND\n> >> p.userid IN\n> >> (SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN\n> >> f.minage AND f.maxage)\n> >>\n> >> In plain English, it is that\n> >>\n> >> Joe finds females between the ages in the location who is not in the block table, while Joe's age is between what\n> they\n> >> prefer.\n> >>\n> >> The query plan is the followings:\n> >>\n> >> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n> >> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n> >> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n> >> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n> >> SubPlan\n> >> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n> >> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n> >> )\n> >> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n> >> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n> >> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n> >> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n> >> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n> >>\n", "msg_date": "Thu, 05 Dec 2002 14:45:26 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "On Thu, Dec 05, 2002 at 11:08:17AM -0800, Vernon Wu wrote:\n> Ron,\n> \n> The gender is indexed. \n\nGiven that gender only has two (? Very few, anyway) values, I can't\nbelieve an index will be much use: it's not very selective. 
Maybe\ncombining several columns in one index will help you.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 5 Dec 2002 14:57:50 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "\nI just learnt the \"common knowledge\" about four hourse ago. That does help to improve the performance indeed \naccording to the explain command.\n\n12/5/2002 11:45:26 AM, Jean-Luc Lachance <[email protected]> wrote:\n\n>It is now common knowledge that the IN clause should be rewriten as an\n>EXISTS.\n>\n>SELECT p.userid, p.year, a.country, a.province, a.city\n>FROM profile p, account a\n>WHERE p.userid=a.userid AND\n> (p.year BETWEEN 1961 AND 1976) AND\n> a.country='CA' AND\n> a.province='BC' AND\n> p.gender='f' AND\n> NOT EXISTS ( SELECT 1 FROM block b WHERE b.personid='Joe' AND p.userid\n>= b.userid) AND\n> block.userid IS NOT NULL AND\n> EXISTS ( SELECT 1 FROM preference f, profile p1 \n> WHERE p1.userid='Joe' AND p.userid = f.userif AND\n> 2002-p1.year BETWEEN f.minage AND f.maxage);\n>\n>\n>\n>Vernon Wu wrote:\n>> \n>> Ron,\n>> \n>> The gender is indexed. Each user has account and preference, but not necessary block.\n>> \n>> I am currently seeking for query optimisation, not system configuration optimisation\n>> \n>> 12/4/2002 9:26:48 PM, Ron Johnson <[email protected]> wrote:\n>> \n>> >On Wed, 2002-12-04 at 18:26, Vernon Wu wrote:\n>> >> I have the following query:\n>> >>\n>> >> SELECT p.userid, p.year, a.country, a.province, a.city\n>> >> FROM profile p, account a\n>> >> WHERE p.userid=a.userid AND\n>> >> (p.year BETWEEN 1961 AND 1976) AND\n>> >> a.country='CA' AND\n>> >> a.province='BC' AND\n>> >> p.gender='f' AND\n>> >> p.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND\n>> >> block.userid IS NOT NULL AND\n>> >> p.userid IN\n>> >> (SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN\n>> >> f.minage AND f.maxage)\n>> >>\n>> >> In plain English, it is that\n>> >>\n>> >> Joe finds females between the ages in the location who is not in the block table, while Joe's age is between what\n>> they\n>> >> prefer.\n>> >>\n>> >> The query plan is the followings:\n>> >>\n>> >> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n>> >> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n>> >> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n>> >> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n>> >> SubPlan\n>> >> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n>> >> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n>> >> )\n>> >> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n>> >> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n>> >> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n>> >> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n>> >> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n>> >>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n\n", "msg_date": "Thu, 05 Dec 2002 12:35:08 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": 
"Good for you! Too bad the parser does not know about it...\n\n\n\nVernon Wu wrote:\n> \n> I just learnt the \"common knowledge\" about four hourse ago. That does help to improve the performance indeed\n> according to the explain command.\n> \n> 12/5/2002 11:45:26 AM, Jean-Luc Lachance <[email protected]> wrote:\n> \n> >It is now common knowledge that the IN clause should be rewriten as an\n> >EXISTS.\n> >\n> >SELECT p.userid, p.year, a.country, a.province, a.city\n> >FROM profile p, account a\n> >WHERE p.userid=a.userid AND\n> > (p.year BETWEEN 1961 AND 1976) AND\n> > a.country='CA' AND\n> > a.province='BC' AND\n> > p.gender='f' AND\n> > NOT EXISTS ( SELECT 1 FROM block b WHERE b.personid='Joe' AND p.userid\n> >= b.userid) AND\n> > block.userid IS NOT NULL AND\n> > EXISTS ( SELECT 1 FROM preference f, profile p1\n> > WHERE p1.userid='Joe' AND p.userid = f.userif AND\n> > 2002-p1.year BETWEEN f.minage AND f.maxage);\n> >\n> >\n> >\n> >Vernon Wu wrote:\n> >>\n> >> Ron,\n> >>\n> >> The gender is indexed. Each user has account and preference, but not necessary block.\n> >>\n> >> I am currently seeking for query optimisation, not system configuration optimisation\n> >>\n> >> 12/4/2002 9:26:48 PM, Ron Johnson <[email protected]> wrote:\n> >>\n> >> >On Wed, 2002-12-04 at 18:26, Vernon Wu wrote:\n> >> >> I have the following query:\n> >> >>\n> >> >> SELECT p.userid, p.year, a.country, a.province, a.city\n> >> >> FROM profile p, account a\n> >> >> WHERE p.userid=a.userid AND\n> >> >> (p.year BETWEEN 1961 AND 1976) AND\n> >> >> a.country='CA' AND\n> >> >> a.province='BC' AND\n> >> >> p.gender='f' AND\n> >> >> p.userid NOT IN (SELECT b.userid FROM block b WHERE b.personid='Joe') AND\n> >> >> block.userid IS NOT NULL AND\n> >> >> p.userid IN\n> >> >> (SELECT f.userid FROM preference f, profile p1 WHERE p1.userid='Joe' AND 2002-p1.year BETWEEN\n> >> >> f.minage AND f.maxage)\n> >> >>\n> >> >> In plain English, it is that\n> >> >>\n> >> >> Joe finds females between the ages in the location who is not in the block table, while Joe's age is between what\n> >> they\n> >> >> prefer.\n> >> >>\n> >> >> The query plan is the followings:\n> >> >>\n> >> >> Nested Loop (cost=0.00..127.12 rows=995 width=894)\n> >> >> -> Nested Loop (cost=0.00..97.17 rows=1 width=894)\n> >> >> -> Seq Scan on account a (cost=0.00..25.00 rows=1 width=289)\n> >> >> -> Index Scan using pk_profile on profile p (cost=0.00..72.16 rows=1 width=605)\n> >> >> SubPlan\n> >> >> -> Materialize (cost=22.50..22.50 rows=5 width=55)\n> >> >> -> Seq Scan on block b (cost=0.00..22.50 rows=5 width=55\n> >> >> )\n> >> >> -> Materialize (cost=44.82..44.82 rows=111 width=89)\n> >> >> -> Nested Loop (cost=0.00..44.82 rows=111 width=89)\n> >> >> -> Index Scan using pk_profile on profile p1 (cost=0.00..4.82 rows=1 width=12)\n> >> >> -> Seq Scan on preference f (cost=0.00..20.00 rows=1000 width=77)\n> >> >> -> Seq Scan on block (cost=0.00..20.00 rows=995 width=0)\n> >> >>\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 5: Have you checked our extensive FAQ?\n> >\n> >http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n", "msg_date": "Thu, 05 Dec 2002 16:04:28 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "Vernon Wu wrote:\n> \n> The followings are the analyise outcomes after I did some modifications with the query. 
My finding is that the new query \n> does improve the performance according to the plan. The actual time is reversed might due to the fact the test data is \n> very small (the machine is a very old one by the way). The userid is the key for all tables and the gender is indexed. Do I \n> also index the country and province to improve the preformance?\n\nYou start by using a dataset of realistic size. Sorry, but if the actual \ntime is < 10.00 ms it is rather pointless to optimize further since \nchance is going to be the biggest factor. And the IN/EXISTS difference \nis dependent on dataset size.\n\nJochem\n\n", "msg_date": "Thu, 05 Dec 2002 22:06:03 +0100", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this query?" }, { "msg_contents": "\n\n12/5/2002 1:06:03 PM, Jochem van Dieten <[email protected]> wrote:\n\n>You start by using a dataset of realistic size. Sorry, but if the actual \n>time is < 10.00 ms it is rather pointless to optimize further since \n>chance is going to be the biggest factor. And the IN/EXISTS difference \n>is dependent on dataset size.\n>\n\nDo you mean that using \"EXIST\" is not necessary out-perform using 'IN\" even the \"explain\" say so? What is the right \nsize for those two key words?\n\nThanks for your very hepful information.\n\nVernon \n\n\n", "msg_date": "Thu, 05 Dec 2002 13:23:16 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this query?" }, { "msg_contents": "Vernon Wu wrote:\n> 12/5/2002 1:06:03 PM, Jochem van Dieten <[email protected]> wrote:\n> \n>>You start by using a dataset of realistic size. Sorry, but if the actual \n>>time is < 10.00 ms it is rather pointless to optimize further since \n>>chance is going to be the biggest factor. And the IN/EXISTS difference \n>>is dependent on dataset size.\n> \n> Do you mean that using \"EXIST\" is not necessary out-perform using 'IN\" even the \"explain\" say so? What is the right \n> size for those two key words?\n\nIIRC, IN might be faster on small datasets, but EXISTS is faster on big \nones. So you have to optimize with a dataset that resembles the actual \ndataset you will be using in production as close as possible. I don't \nknow what the size is at which one gets faster as the other.\n\nJochem\n\n", "msg_date": "Thu, 05 Dec 2002 23:05:57 +0100", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this query?" }, { "msg_contents": "Andrew,\n\nFollowing your suggestion, I have combined the year field with the gender to create a multicolumn index. That shall be \nbetter than indexing gender alone. I also create a multicolumn index (country, province, city) for the account table. \n\nWould you suggest indexing all possible fields such as ethnicity, religion\t, education, employment in the profile table; or \nbased on what queries I run, to have some multicolumn indexes?\n\nBTW, do you get a lot of snow in Toronto these few days?\n\nVeronon\n\n12/5/2002 11:57:50 AM, Andrew Sullivan <[email protected]> wrote:\n\n>On Thu, Dec 05, 2002 at 11:08:17AM -0800, Vernon Wu wrote:\n>> Ron,\n>> \n>> The gender is indexed. \n>\n>Given that gender only has two (? Very few, anyway) values, I can't\n>believe an index will be much use: it's not very selective. 
Maybe\n>combining several columns in one index will help you.\n>\n>A\n>\n>-- \n>----\n>Andrew Sullivan 204-4141 Yonge Street\n>Liberty RMS Toronto, Ontario Canada\n><[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n\n", "msg_date": "Thu, 05 Dec 2002 15:19:29 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "On Thu, Dec 05, 2002 at 04:04:28PM -0500, Jean-Luc Lachance wrote:\n> Good for you! Too bad the parser does not know about it...\n\nLAst I heard, there was a problem about providing a rigourous\nmathematical proof that NOT EXISTS and NOT IN are really the same. \nIf you can prove it, I'm sure people would be pleased.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 5 Dec 2002 18:56:22 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "On Thu, Dec 05, 2002 at 03:19:29PM -0800, Vernon Wu wrote:\n> Andrew,\n> \n> Would you suggest indexing all possible fields such as ethnicity,\n> religion , education, employment in the profile table; or based on\n> what queries I run, to have some multicolumn indexes?\n\nNever index anything more than you need. There is a fairly serious\npenalty at insertion time for indexes, so you can reduce some\noverhead that way. Note, too, that index space is not recaptured by\nPostgres's VACUUM, which imposes a small performance cost, but can be\na real disk-gobbler if you're not careful.\n\n> BTW, do you get a lot of snow in Toronto these few days?\n\nWe had some a few weeks ago. It's pretty clear right now.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 5 Dec 2002 18:58:35 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "On Thu, 5 Dec 2002, Vernon Wu wrote:\n\n> Andrew,\n> \n> Following your suggestion, I have combined the year field with the gender to create a multicolumn index. That shall be \n> better than indexing gender alone. I also create a multicolumn index (country, province, city) for the account table. 
\n> \n> Would you suggest indexing all possible fields such as ethnicity, religion\t, education, employment in the profile table; or \n> based on what queries I run, to have some multicolumn indexes?\n> \n> BTW, do you get a lot of snow in Toronto these few days?\n\nVernon, just so you know, for multi-column indexes to be useful in \nPostgresql, the columns need to be used in the same order they are \ndeclared in the index if you are using them for an order by.\n\nselect * from table order by sex, age;\n\ncould use the index\n\ncreate column table_sex_age on table (sex,age);\n\nbut would not use the index\n\ncreate column table_age_sex on table (age,sex);\n\nHowever, the order in a where clause portion doesn't really seem to \nmatter, so \n\nselect * from table where sex='m' and age>=38\n\nand\n\nselect * from table where age>=38 and sex='m' \n\nshould both be able to use the index.\n\nalso, you can use functional indexes, but the arguments in the where \nclause need the same basic form to be useful. So, if you commonly make a \nselect like this:\n\nselect * from table where age>50 and age<=59;\n\nthen you could make a functional index like :\n\ncreate index table_age_50_59 on table (age) where age>50 and age<=59;\n\nHowever, the query \n\nselect * from table where age>50 and age<=58;\n\nWouldn't use that index, since the age <= part doesn't match up. It could \npossible use a generic index on age though, i.e. one like\n\ncreate index table_age on table (age);\n\nBut that index will be larger than the partial one, and so the planner may \nskip using it and use a seq scan instead. Hard to say until your database \nis populated with some representational test data.\n\nSince these indexes will be only a small fraction of the total data, it \nwill often be advantageous to use them with a query.\n\nAfter you have a set of test data, then you can start looking at tuning \nrandom page cost and such to make your hardware perform properly for \nindividual queries. Well, hope that helps.\n\n", "msg_date": "Thu, 5 Dec 2002 17:18:10 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "12/5/2002 4:18:10 PM, \"scott.marlowe\" <[email protected]> wrote:\n\n>Vernon, just so you know, for multi-column indexes to be useful in \n>Postgresql, the columns need to be used in the same order they are \n>declared in the index if you are using them for an order by.\n>\n>select * from table order by sex, age;\n>\n>could use the index\n>\n>create column table_sex_age on table (sex,age);\n>\n>but would not use the index\n>\n>create column table_age_sex on table (age,sex);\n>\n\nI haven't have this case yet, might apply for some queries soon.\n\n>However, the order in a where clause portion doesn't really seem to \n>matter, so \n>\n>select * from table where sex='m' and age>=38\n>\n>and\n>\n>select * from table where age>=38 and sex='m' \n>\n>should both be able to use the index.\n>\n>also, you can use functional indexes, but the arguments in the where \n>clause need the same basic form to be useful. So, if you commonly make a \n>select like this:\n>\n>select * from table where age>50 and age<=59;\n>\n>then you could make a functional index like :\n>\n>create index table_age_50_59 on table (age) where age>50 and age<=59;\n>\n>However, the query \n>\n>select * from table where age>50 and age<=58;\n>\n>Wouldn't use that index, since the age <= part doesn't match up. 
It could \n>possible use a generic index on age though, i.e. one like\n>\n>create index table_age on table (age);\n>\n\nI didn't know the functional index. Thanks for the eductional information.\n\n>But that index will be larger than the partial one, and so the planner may \n>skip using it and use a seq scan instead. Hard to say until your database \n>is populated with some representational test data.\n>\n>Since these indexes will be only a small fraction of the total data, it \n>will often be advantageous to use them with a query.\n>\n>After you have a set of test data, then you can start looking at tuning \n>random page cost and such to make your hardware perform properly for \n>individual queries. Well, hope that helps.\n>\n>\n\nI will do some fine query tuning in the final test phase. Right now, I want to make sure the table design and queries are \non the right track.\n\nThat indeed helps.\n\nThanks,\n\nVernon \n\n\n\n", "msg_date": "Thu, 05 Dec 2002 16:43:58 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this" } ]
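A minimal DDL sketch of the indexes discussed in this thread; the table and column names (profile, account, gender, year, age, country, province, city) are illustrative stand-ins for the poster's actual schema, and the last statement is the partial-index idea sketched above with the age-range predicate:

    -- Multicolumn index: serves WHERE clauses on the leading column(s) and
    -- ORDER BY lists that name the columns in the declared order.
    CREATE INDEX profile_gender_year_idx ON profile (gender, year);

    -- Multicolumn index on the account table for location lookups.
    CREATE INDEX account_location_idx ON account (country, province, city);

    -- Partial index: covers only rows matching its predicate, so it stays
    -- small; the query's WHERE clause has to match the predicate closely
    -- enough for the planner to prove it applies.
    CREATE INDEX profile_age_50_59_idx ON profile (age)
        WHERE age > 50 AND age <= 59;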
[ { "msg_contents": "I am not sure whether this is a know problem but we discovered this the\nother day.\nWe are using PostgreSQL 7.2.1 on Redhat 7.3.\n\nThe table has about over a million rows (~1.4).\n\nThe query concerned is of the form\n\nSELECT *\nFROM tblCompany\nWHERE lower(companyname) like 'company%'\nORDER BY companyname\nLIMIT 20,0\n\nThere is a functional index lower(companyname) for the like clause.\n\nWithout the LIMIT clause the query takes approximately 3-5 seconds to\nreturn.\nIf total number of rows returned without the LIMIT clause is greater\nthan 20 records, then the above query also takes th same amount of time.\nBut if the the total number of rows is 20 or less then the time taken\nfor the above query to return goes up to 20-30 seconds. Has anyone else\ncome across this. We have managed to get round it by performing a count\nfirst and only performing the LIMIT if there are enough rows but surely\nthe query should be able to do this itself!\n\nJohn Cartmell\n", "msg_date": "Thu, 5 Dec 2002 09:51:09 -0000", "msg_from": "\"john cartmell\" <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY ... LIMIT.. performance" }, { "msg_contents": "\"john cartmell\" <[email protected]> writes:\n> Without the LIMIT clause the query takes approximately 3-5 seconds to\n> return.\n> If total number of rows returned without the LIMIT clause is greater\n> than 20 records, then the above query also takes th same amount of time.\n> But if the the total number of rows is 20 or less then the time taken\n> for the above query to return goes up to 20-30 seconds.\n\nWhat does EXPLAIN (or better EXPLAIN ANALYZE) show for these various\ncases? Evidently the planner is shifting to a different plan because\nof the small LIMIT, but with no details it's hard to say anything\nuseful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Dec 2002 17:00:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT.. performance " }, { "msg_contents": "John,\n\n> I am not sure whether this is a know problem but we discovered this\n> the\n> other day.\n> We are using PostgreSQL 7.2.1 on Redhat 7.3.\n\nFirst of all, there are a few bug-fixes between 7.2.1 and 7.2.3. One\nrelates to backups, and another to security. So you should upgrade to\n7.2.3 immediately -- no init or restore from backup required (not\nversion 7.3, which has some significant changes).\n\n> The table has about over a million rows (~1.4).\n> \n> The query concerned is of the form\n> \n> SELECT *\n> FROM tblCompany\n> WHERE lower(companyname) like 'company%'\n> ORDER BY companyname\n> LIMIT 20,0\n> \n> There is a functional index lower(companyname) for the like clause.\n> \n> Without the LIMIT clause the query takes approximately 3-5 seconds to\n> return.\n> If total number of rows returned without the LIMIT clause is greater\n> than 20 records, then the above query also takes th same amount of\n> time.\n> But if the the total number of rows is 20 or less then the time taken\n> for the above query to return goes up to 20-30 seconds. Has anyone\n> else\n> come across this. We have managed to get round it by performing a\n> count\n> first and only performing the LIMIT if there are enough rows but\n> surely\n> the query should be able to do this itself!\n\nThis seems very odd. 
Please do the following:\n\n1) Post an EXPLAIN ANALYZE statement for the above query, with limit,\nthat returns in 3-5 seconds.\n2) Post an EXPLAIN ANALYZE for a query that returns slowly (20-30\nseconds).\n\nThanks!\n\n-Josh\n\n\n", "msg_date": "Thu, 05 Dec 2002 14:21:18 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT.. performance" } ]
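For reference, the pair of statements being asked for here looks roughly like this (same query and table as the original post, with the LIMIT written in its plain form):

    EXPLAIN ANALYZE
    SELECT * FROM tblCompany
    WHERE lower(companyname) LIKE 'company%'
    ORDER BY companyname
    LIMIT 20;

    EXPLAIN ANALYZE
    SELECT * FROM tblCompany
    WHERE lower(companyname) LIKE 'company%'
    ORDER BY companyname;

Comparing the two plans, and the actual row counts and times they report, is what shows whether the planner switches strategy because of the small LIMIT.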
[ { "msg_contents": "I have a question about some strange behavior on what should be a rather\neasy issue.\nI am not getting the query plan that I expect given the query and the\nindexes.\n\nI have a table with the following structure:\nCREATE TABLE tblCompany(\n intCmpID serial NOT NULL,\n vchCmpName varchar(60) NOT NULL,\n vchCmpAltName varchar(100) NULL,\n vchCmpPrevName varchar(60) NULL,\n intCmpParentID int NULL,\n intCmpOwnerEmpID int NOT NULL,\n dateCmpMaintained datetime NOT NULL,\n chrCmpStatus char(1) NOT NULL,\n intModifiedBy int NOT NULL,\n dateModifiedOn datetime NOT NULL,\n CONSTRAINT pkCompany PRIMARY KEY (intCmpID)\n) ;\n\nIt has the following index:\nCREATE INDEX idxCompany1 ON tblCompany(vchCmpName);\n\nWhen I run the following query in Postgres, I get the results I expect:\nCRMDB=> explain select * from tblCompany where vchCmpName = 'Gensler';\nNOTICE: QUERY PLAN:\n\nIndex Scan using idxcompany1 on tblcompany (cost=0.00..5.21 rows=1\nwidth=212)\n\nEXPLAIN\n\n\nThis work under both Windows and Linux.\n\nWhen I run the following query under Windows, I get what I expect:\nCRMDB=> explain select * from tblCompany where vchcmpname like 'Gensler%';\nNOTICE: QUERY PLAN:\n\nIndex Scan using idxcompany1 on tblcompany (cost=0.00..17.07 rows=1\nwidth=201)\n\nEXPLAIN\n\nHowever, when I run the same query under Linux, I get this:\nCRMDB=> explain select * from tblCompany where vchCmpName like 'Gensler%';\nNOTICE: QUERY PLAN:\n\nSeq Scan on tblcompany (cost=100000000.00..100000002.01 rows=1 width=212)\n\nEXPLAIN\n\nI really don't understand why this is happening, but I am hoping that\nsomeone on this list has an idea. The versions of Postgres that I am using\nare Windows 7.2.2 and Linux 7.2.1 and 7.2.2. The Windows version is the\ncompiled version that comes with Cygwin and the Linux versions are the RPMs\nthat come with Redhat 7.3, Mandrake 9.0 and the Redhat 7.3 RPM from the\nPostgres site.\n\nIf anyone has an ideas suggestions I would really appreciate it.\n\nTIA\nEric\n\n", "msg_date": "Thu, 5 Dec 2002 08:34:14 -0800", "msg_from": "\"Eric Theis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index question with LIKE keyword" }, { "msg_contents": "On Thu, 5 Dec 2002, Eric Theis wrote:\n\n> This work under both Windows and Linux.\n>\n> When I run the following query under Windows, I get what I expect:\n> CRMDB=> explain select * from tblCompany where vchcmpname like 'Gensler%';\n> NOTICE: QUERY PLAN:\n>\n> Index Scan using idxcompany1 on tblcompany (cost=0.00..17.07 rows=1\n> width=201)\n>\n> EXPLAIN\n>\n> However, when I run the same query under Linux, I get this:\n> CRMDB=> explain select * from tblCompany where vchCmpName like 'Gensler%';\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on tblcompany (cost=100000000.00..100000002.01 rows=1 width=212)\n>\n> EXPLAIN\n>\n> I really don't understand why this is happening, but I am hoping that\n> someone on this list has an idea. The versions of Postgres that I am using\n> are Windows 7.2.2 and Linux 7.2.1 and 7.2.2. The Windows version is the\n> compiled version that comes with Cygwin and the Linux versions are the RPMs\n> that come with Redhat 7.3, Mandrake 9.0 and the Redhat 7.3 RPM from the\n> Postgres site.\n>\n> If anyone has an ideas suggestions I would really appreciate it.\n\nThe linux box is probably not running in \"C\" locale (or at least initdb\nwasn't run in \"C\" locale). 
The optimization for using indexes on like\ncurrently only works in that locale (because there are issues in some/many\nother locales that makes the transformation invalid). There's been talk\nabout this issue on (I think) -general (or if not there then -hackers)\nrecently.\n\n\n", "msg_date": "Thu, 5 Dec 2002 08:47:29 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index question with LIKE keyword" } ]
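Two usual ways around this, sketched with the index and column names from the original post; the pattern operator classes in the second option were added in releases newer than the 7.2.x being discussed here:

    -- Option 1: initialise the cluster in the C locale so an ordinary btree
    -- index can serve LIKE 'prefix%' searches (the locale is fixed at initdb
    -- time, so this means a dump, initdb --locale=C, and a reload).

    -- Option 2 (later releases only): declare the index with a pattern
    -- operator class, which makes it usable for LIKE prefix matching under
    -- any locale:
    CREATE INDEX idxcompany1_pattern
        ON tblCompany (vchCmpName varchar_pattern_ops);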
[ { "msg_contents": "\nHi,\n\n\nI have got there SCSIs HDDs for the postgresql Box\ni plan to install OS on 1st , 2nd for tables and indexes and third \nfor pg_xlog.\n\nBruce Momjian's \"H/W Perf tuning\" on\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node11.html\nrecommends 8k blocksize for the filesystem which is same as the page size.\n\nOn linux ext2 fs (on my server) 4096 is the default blocak size and 8192 bytes\nis not supported.\n\nbut man mkfs.ext2 on linux mentions an option -T which is:\n-T fs-type\n Specify how the filesystem is going to be used, so that mke2fs can chose optimal filesystem parameters for that\n use. The supported filesystem types are:\n news one inode per 4kb block\n largefile one inode per megabyte\n largefile4 one inode per 4 megabytes\n\nis the above relevent as far as optimisation for filesystem for tables is concerned?\n\nAlso for the pg_xlog drive is a particular block size more favourable then others?\n\n\nregds\nmallah.\n\n\n\n\n\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Fri, 6 Dec 2002 02:11:27 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Filesystem optimisation for postgresql tables and WAL logs on linux." }, { "msg_contents": "\nI don't remember saying they should match, it is just that it is nice if\nit does, but I don't think it would make any major difference in\nperformance. In fact, some use 32k pages sizes, and get a performance\nboost, and clearly don't have 32k file system blocks.\n\n---------------------------------------------------------------------------\n\nRajesh Kumar Mallah. wrote:\n> \n> Hi,\n> \n> \n> I have got there SCSIs HDDs for the postgresql Box\n> i plan to install OS on 1st , 2nd for tables and indexes and third \n> for pg_xlog.\n> \n> Bruce Momjian's \"H/W Perf tuning\" on\n> http://www.ca.postgresql.org/docs/momjian/hw_performance/node11.html\n> recommends 8k blocksize for the filesystem which is same as the page size.\n> \n> On linux ext2 fs (on my server) 4096 is the default blocak size and 8192 bytes\n> is not supported.\n> \n> but man mkfs.ext2 on linux mentions an option -T which is:\n> -T fs-type\n> Specify how the filesystem is going to be used, so that mke2fs can chose optimal filesystem parameters for that\n> use. The supported filesystem types are:\n> news one inode per 4kb block\n> largefile one inode per megabyte\n> largefile4 one inode per 4 megabytes\n> \n> is the above relevent as far as optimisation for filesystem for tables is concerned?\n> \n> Also for the pg_xlog drive is a particular block size more favourable then others?\n> \n> \n> regds\n> mallah.\n> \n> \n> \n> \n> \n> \n> -- \n> Rajesh Kumar Mallah,\n> Project Manager (Development)\n> Infocom Network Limited, New Delhi\n> phone: +91(11)6152172 (221) (L) ,9811255597 (M)\n> \n> Visit http://www.trade-india.com ,\n> India's Leading B2B eMarketplace.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Dec 2002 17:42:49 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem optimisation for postgresql tables and WAL" } ]
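A sketch of the common way to check the filesystem block size and give pg_xlog its own spindle; device names and mount points are illustrative, and the postmaster must be stopped before moving the directory:

    # ext2 on x86 tops out at 4096-byte blocks; per the reply above, not
    # matching the 8k page size is not a significant problem.
    mke2fs -b 4096 /dev/sdc1

    # Confirm the block size an ext2 filesystem was created with:
    tune2fs -l /dev/sdc1 | grep 'Block size'

    # Classic symlink trick for putting the WAL on a separate disk:
    mount /dev/sdc1 /mnt/pg_xlog
    mv $PGDATA/pg_xlog /mnt/pg_xlog/pg_xlog
    ln -s /mnt/pg_xlog/pg_xlog $PGDATA/pg_xlog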
[ { "msg_contents": "Tomasz,\n\nI am under the impression that a primary key field is automically as a unique index. It seems to be correct after I verify it \nwith the page http://www.commandprompt.com/ppbook/index.lxp?lxpwrap=c13329%2ehtm#CREATINGANINDEX\n\nThanks for bringing up the question.\n\nVernon\n\n12/5/2002 12:35:36 PM, Tomasz Myrta <[email protected]> wrote:\n\n>Vernon Wu wrote:\n>\n>> The personid is a foreign key in the block table, and the the userid \n>> is the key of the profile table. So, both are indexed\n>> by nature (if I don't make a mistake).\n>\n>What kind of nature? Did you create indexes for these fields? Postgres \n>doesn't create indexes by itself - even if field is a primary key. You \n>have to do it on your own. I think also, that Postgres doesn't use index \n>for tables having less then 200 rows - sequence scan is faster.\n>Regards,\n>Tomasz Myrta\n>\n>\n>\n\n\n\n", "msg_date": "Thu, 05 Dec 2002 12:58:13 -0800", "msg_from": "Vernon Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is a better way to have the same result of this" }, { "msg_contents": "Vernon Wu wrote:\n\n> Tomasz,\n>\n> I am under the impression that a primary key field is automically as a \n> unique index. It seems to be correct after I verify it\n> with the page \n> http://www.commandprompt.com/ppbook/index.lxp?lxpwrap=c13329%2ehtm#CREATINGANINDEX\n\nYou are right. Primary key creates unique index. I use sometimes Pgadmin \nand it doesn't show these indexes :-(\nTomasz Myrta\n\n", "msg_date": "Thu, 05 Dec 2002 22:19:58 +0100", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is a better way to have the same result of this" } ]
[ { "msg_contents": "I wish to create an alter command which will allow a table to have OIDs\nadded or removed.\n\n\nThe tricky part appears to be changing the tuples themselves. I believe\nif I pull the same trick that cluster does (create new file, copy\ntuples, etc) it can be done fairly easily.\n\nFirst, set up pg_class appropriately (oid flag).\n\nSecond, copy out tuples from oldfile to newfile, running a\nheap_deformtuple() -> heap_formtuple() process on each. Since\nheap_deformtuple only deals with positive numbered attributes\n(non-system attributes) this should be safe to do on a mis-configured\nrelation. heap_formtuple completes the dirty work of setting up the OID\ncolumn appropriately.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "05 Dec 2002 23:11:45 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER TABLE .. < ADD | DROP > OIDS" }, { "msg_contents": "\nOK, patch applied and tested.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n-- Start of PGP signed section.\n> I wish to create an alter command which will allow a table to have OIDs\n> added or removed.\n> \n> \n> The tricky part appears to be changing the tuples themselves. I believe\n> if I pull the same trick that cluster does (create new file, copy\n> tuples, etc) it can be done fairly easily.\n> \n> First, set up pg_class appropriately (oid flag).\n> \n> Second, copy out tuples from oldfile to newfile, running a\n> heap_deformtuple() -> heap_formtuple() process on each. Since\n> heap_deformtuple only deals with positive numbered attributes\n> (non-system attributes) this should be safe to do on a mis-configured\n> relation. heap_formtuple completes the dirty work of setting up the OID\n> column appropriately.\n> \n> -- \n> Rod Taylor <[email protected]>\n> \n> PGP Key: http://www.rbt.ca/rbtpub.asc\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 6 Dec 2002 00:00:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, patch applied and tested.\n\nSorry, wrong email. I meant to say that his previous ALTER DOMAIN patch\nhad been applied with the new file now supplied.\n\n\n> \n> ---------------------------------------------------------------------------\n> \n> Rod Taylor wrote:\n> -- Start of PGP signed section.\n> > I wish to create an alter command which will allow a table to have OIDs\n> > added or removed.\n> > \n> > \n> > The tricky part appears to be changing the tuples themselves. I believe\n> > if I pull the same trick that cluster does (create new file, copy\n> > tuples, etc) it can be done fairly easily.\n> > \n> > First, set up pg_class appropriately (oid flag).\n> > \n> > Second, copy out tuples from oldfile to newfile, running a\n> > heap_deformtuple() -> heap_formtuple() process on each. Since\n> > heap_deformtuple only deals with positive numbered attributes\n> > (non-system attributes) this should be safe to do on a mis-configured\n> > relation. 
heap_formtuple completes the dirty work of setting up the OID\n> > column appropriately.\n> > \n> > -- \n> > Rod Taylor <[email protected]>\n> > \n> > PGP Key: http://www.rbt.ca/rbtpub.asc\n> -- End of PGP section, PGP failed!\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 6 Dec 2002 00:08:02 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> I wish to create an alter command which will allow a table to have OIDs\n> added or removed.\n\n> The tricky part appears to be changing the tuples themselves.\n\nAre you sure you need to? Methinks the lazy approach of letting them\nauto-adjust on next UPDATE should work as well for OIDs as for user\ncolumns.\n\nThere might be a few places that look at the pg_class.relhasoids\nfield where they should be examining the tuple header has-oid bit,\nbut I don't think there are many.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 15:19:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS " }, { "msg_contents": "On Fri, 2002-12-06 at 15:19, Tom Lane wrote:\n> Rod Taylor <[email protected]> writes:\n> > I wish to create an alter command which will allow a table to have OIDs\n> > added or removed.\n> \n> > The tricky part appears to be changing the tuples themselves.\n\n> There might be a few places that look at the pg_class.relhasoids\n> field where they should be examining the tuple header has-oid bit,\n> but I don't think there are many.\n\nOk.. If you think thats safe, I'll give it a try. I was afraid that the\nsystem would confuse itself if the table had mix and matched tuples in\nit. New tuples without oids, old tuples with.\n\nThat helps DROP OID. How about ADD OID?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "06 Dec 2002 15:35:32 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> Ok.. If you think thats safe, I'll give it a try. I was afraid that the\n> system would confuse itself if the table had mix and matched tuples in\n> it. New tuples without oids, old tuples with.\n\nManfred's original implementation would have failed (since it didn't\nhave a tuple-header hasoid bit). I think I got all the places that\nshould consult the header bit, but there may be some left; you'll need\nto test.\n\n> That helps DROP OID. How about ADD OID?\n\nWhat about it? I think it'll work just like adding a column, except\nthat OID will probably read as 0 not NULL if the row hasn't been updated\nyet. 
(You could probably make it read as NULL if you wanted though.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 16:05:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS " }, { "msg_contents": "> > That helps DROP OID. How about ADD OID?\n> \n> What about it? I think it'll work just like adding a column, except\n> that OID will probably read as 0 not NULL if the row hasn't been updated\n> yet. (You could probably make it read as NULL if you wanted though.)\n\nGood point. I forgot new columns were empty by default.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "06 Dec 2002 16:24:00 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE .. < ADD | DROP > OIDS" } ]
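Until an ALTER TABLE form like this exists, the manual way to shed OIDs is to rebuild the table; a sketch with an illustrative two-column table (indexes, constraints, defaults and triggers have to be recreated afterwards):

    BEGIN;
    -- Recreate the same columns, this time explicitly WITHOUT OIDS.
    CREATE TABLE mytable_new (
        id    integer NOT NULL,
        name  text
    ) WITHOUT OIDS;
    INSERT INTO mytable_new SELECT id, name FROM mytable;
    DROP TABLE mytable;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;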
[ { "msg_contents": "\n\n\n> 1) Post an EXPLAIN ANALYZE statement for the above query, with limit,\n> that returns in 3-5 seconds.\n> 2) Post an EXPLAIN ANALYZE for a query that returns slowly (20-30\n> seconds).\n\nThe query:\nSELECT * FROM tblcompany WHERE lower(companyname) like 'a g m%' ORDER BY\ncompanyname;\nreturns 20 rows.\nIts EXPLAIN ANALYZE is as follows:\n\tNOTICE: QUERY PLAN:\n\n\tSort (cost=64196.18..64196.18 rows=6339 width=224) (actual\ntime=2274.64..2274.66 rows=20 loops=1)\n\t -> Seq Scan on tblcompany (cost=0.00..63795.86 rows=6339\nwidth=224) (actual time=1023.37..2274.41 rows=20 loops=1)\n\tTotal runtime: 2274.78 msec\n\nWhen limit is 19:\n\tEXPLAIN ANALYZE SELECT * FROM tblcompany WHERE\nlower(companyname) like 'a g m%' ORDER BY companyname LIMIT 19,0;\n\tNOTICE: QUERY PLAN:\n\n\tLimit (cost=0.00..4621.68 rows=19 width=223) (actual\ntime=561.20..563.11 rows=19 loops=1)\n\t -> Index Scan using idx_tblcompany_companyname on tblcompany\n(cost=0.00..1542006.83 rows=6339 width=223) (actual time=561.19..563.07\nrows=20 loops=1)\n\tTotal runtime: 563.22 msec\n\n\nBut when it is 20:\n\tEXPLAIN ANALYZE SELECT * FROM tblcompany WHERE\nlower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;\n\tNOTICE: QUERY PLAN:\n\n\tLimit (cost=0.00..4864.92 rows=20 width=223) (actual\ntime=559.58..21895.02 rows=20 loops=1)\n\t -> Index Scan using idx_tblcompany_companyname on tblcompany\n(cost=0.00..1542006.83 rows=6339 width=223) (actual\ntime=559.57..21894.97 rows=20 loops=1)\n\tTotal runtime: 21895.13 msec\n\n\n\tAdmitedly the query without the limit has a different query plan\nbut the last two don't and yet vary wildly.\n\tJohn Cartmell\n\n\n", "msg_date": "Fri, 6 Dec 2002 11:32:04 -0000", "msg_from": "\"john cartmell\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY ... LIMIT.. performance" }, { "msg_contents": "John,\n\n> But when it is 20:\n> EXPLAIN ANALYZE SELECT * FROM tblcompany WHERE\n> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..4864.92 rows=20 width=223) (actual\n> time=559.58..21895.02 rows=20 loops=1)\n> -> Index Scan using idx_tblcompany_companyname on tblcompany\n> (cost=0.00..1542006.83 rows=6339 width=223) (actual\n> time=559.57..21894.97 rows=20 loops=1)\n> Total runtime: 21895.13 msec\n\nThat's extremely odd. From the look of it, Postgres is taking an\nextra 18 seconds just to find that 20th row. \n\nDoes this table expereince very frequent deletions and updates, or\nperhaps mass record replacement from a file? Try running VACUUM FULL\nANALYZE, and possibly even REINDEX on idx_tblcompany_companyname.\n Massive numbers of dead tuples could account for this performance\nirregularity.\n\n-Josh\n", "msg_date": "Fri, 06 Dec 2002 10:13:40 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT.. 
performance" }, { "msg_contents": "\"john cartmell\" <[email protected]> writes:\n> The query:\n> SELECT * FROM tblcompany WHERE lower(companyname) like 'a g m%' ORDER BY\n> companyname;\n> returns 20 rows.\n ^^^^^^^^^^^^^^^\n\nAhh, light dawns.\n\n> When limit is 19:\n> \tEXPLAIN ANALYZE SELECT * FROM tblcompany WHERE\n> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 19,0;\n> \tNOTICE: QUERY PLAN:\n\n> \tLimit (cost=0.00..4621.68 rows=19 width=223) (actual\n> time=561.20..563.11 rows=19 loops=1)\n> \t -> Index Scan using idx_tblcompany_companyname on tblcompany\n> (cost=0.00..1542006.83 rows=6339 width=223) (actual time=561.19..563.07\n> rows=20 loops=1)\n> \tTotal runtime: 563.22 msec\n\n> But when it is 20:\n> \tEXPLAIN ANALYZE SELECT * FROM tblcompany WHERE\n> lower(companyname) like 'a g m%' ORDER BY companyname LIMIT 20,0;\n> \tNOTICE: QUERY PLAN:\n\n> \tLimit (cost=0.00..4864.92 rows=20 width=223) (actual\n> time=559.58..21895.02 rows=20 loops=1)\n> \t -> Index Scan using idx_tblcompany_companyname on tblcompany\n> (cost=0.00..1542006.83 rows=6339 width=223) (actual\n> time=559.57..21894.97 rows=20 loops=1)\n> \tTotal runtime: 21895.13 msec\n\nThe problem here is that in current releases, the Limit plan node tries\nto fetch one more row than requested (you can see this in the actual\nrowcounts for the first example). So in your second example, the base\nindexscan is actually being run to completion before the Limit gives up.\nAnd since that scan is being used for ordering, not for implementing the\nWHERE clause, it visits all the rows. (When you leave off LIMIT, the\nplanner chooses a plan that's more amenable to fetching all the data...)\n\nI recently revised the Limit logic so that it doesn't fetch the extra\nrow. This takes more code, but you're not the first to complain of\nthe old behavior. It'll be in 7.4, or if you're brave you could\nprobably apply the diff to 7.3.\n\nIn the meantime, a more appropriate query would be\n\nSELECT * FROM tblcompany\nWHERE lower(companyname) like 'a g m%'\nORDER BY lower(companyname)\nLIMIT whatever\n\nso that an index on lower(companyname) could be used both for the WHERE\nclause and for the ordering.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 16:28:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT.. performance " } ]
[ { "msg_contents": "Folks,\n\nOne of Postgres' poorest performing areas is aggregates. This is the\nunfortunate side effect of our fully extensible aggregate and type\nsystem. However, I thought that the folks on this list might have a\nfew tips on making aggregates perform faster.\n\nHere's mine: Aggregate Caching Table\n\nThis is a brute-force approach. However, if you have a table with a\nmillion records for which users *frequently* ask for grand totals or\ncounts, it can work fine.\n\nA simple example:\n\nTable client_case_counts (\n\tclient_id INT NOT NULL REFERENCES clients(client_id) ON DELETE\nCASCADE;\n\tno_cases INT NOT NULL DEFAULT 0\n);\n\nThen create triggers:\n\nFunction tf_maintain_client_counts () \nreturns opaque as '\nBEGIN\nUPDATE client_case_counts SET no_cases = no_cases + 1\nWHERE client_id = NEW.client_id;\nINSERT INTO client_case_counts ( client_id, no_cases )\nVALUES ( NEW.client_id, 1 )\nWHERE NOT EXISTS (SELECT client_id FROM client_case_counts ccc2\n\tWHERE ccc2.client_id = NEW.client_id);\nRETURN NEW;\nEND;' LANGUAGE 'plpgsql';\n\nTrigger tg_maintain_client_counts ON INSERT INTO cases\nFOR EACH ROW EXECUTE tf_maintain_client_counts();\netc.\n\nWhile effective, this approach is costly in terms of update/insert\nprocessing. It is also limited to whatever aggregate requests you have\nanticipated ... it does no good for aggregates over a user-defined\nrange.\n\nWhat have other Postgres users done to speed up aggregates on large\ntables?\n\n-Josh Berkus\n\n\n\n\n\t\n", "msg_date": "Fri, 06 Dec 2002 09:30:46 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Speeding up aggregates" }, { "msg_contents": "Josh Berkus wrote:\n> While effective, this approach is costly in terms of update/insert\n> processing. It is also limited to whatever aggregate requests you have\n> anticipated ... it does no good for aggregates over a user-defined\n> range.\n\nI think this is where Oracle's materialized views come into play.\n\n> \n> What have other Postgres users done to speed up aggregates on large\n> tables?\n\nI've found that in most real life applications, expensive aggregate queries \ntend to be needed for management reporting, which does not need to be based on \nup-to-the-second fresh data. Normally for these types of reports a summary \nthrough say last night at midnight is perfectly adequate.\n\nThe simplest solution in these cases is to build a table to hold your \npartially or completely summarized data, then report off of that. Use cron to \nrefresh these summary tables at convenient times (daily, every 2 hours, or \nwhatever).\n\nJoe\n\n", "msg_date": "Fri, 06 Dec 2002 10:10:45 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n> What have other Postgres users done to speed up aggregates on large\n> tables?\n\nFWIW, I've implemented hashed aggregation in CVS tip. I have not had\nthe time to try to benchmark it, but I'd be interested if anyone can\nrun some tests on 7.4devel. 
Eliminating the need for a SORT step\nshould help aggregations over large datasets.\n\nNote that even though there's no SORT, the sort_mem setting is used\nto determine the allowable hashtable size, so a too-small sort_mem\nmight discourage the planner from selecting hashed aggregation.\nUse EXPLAIN to see which query plan gets chosen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 15:46:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates " }, { "msg_contents": "Tom Lane kirjutas L, 07.12.2002 kell 01:46:\n> \"Josh Berkus\" <[email protected]> writes:\n> > What have other Postgres users done to speed up aggregates on large\n> > tables?\n> \n> FWIW, I've implemented hashed aggregation in CVS tip.\n\nGreat! \n\nThis should also make it easier to implement all kinds of GROUP BY\nROLLUP|CUBE|GROUPING SETS|() queries.\n\nDo you have any near-term plans for doing them ?\n\n> I have not had\n> the time to try to benchmark it, but I'd be interested if anyone can\n> run some tests on 7.4devel. Eliminating the need for a SORT step\n> should help aggregations over large datasets.\n\nIs there a variable to set that would disable one or another, like we\ncurrently have for disabling various join strategies ?\n\n> Note that even though there's no SORT, the sort_mem setting is used\n> to determine the allowable hashtable size, so a too-small sort_mem\n> might discourage the planner from selecting hashed aggregation.\n\nDo you mean that hashed aggregation can't overflow to disk, or would it\njust be too slow ?\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "07 Dec 2002 02:32:06 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> This should also make it easier to implement all kinds of GROUP BY\n> ROLLUP|CUBE|GROUPING SETS|() queries.\n\n> Do you have any near-term plans for doing them ?\n\nNot me.\n\n> Is there a variable to set that would disable one or another, like we\n> currently have for disabling various join strategies ?\n\nenable_hashagg. I didn't think about one to prevent the old style.\n\n>> Note that even though there's no SORT, the sort_mem setting is used\n>> to determine the allowable hashtable size, so a too-small sort_mem\n>> might discourage the planner from selecting hashed aggregation.\n\n> Do you mean that hashed aggregation can't overflow to disk, or would it\n> just be too slow ?\n\nI didn't write any code to let it overflow to disk --- didn't seem\nlikely to be useful. (You're probably better off with a sort-based\naggregation if there are too many distinct grouping keys.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 16:42:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates " }, { "msg_contents": "Hannu Krosing kirjutas L, 07.12.2002 kell 02:32:\n> Tom Lane kirjutas L, 07.12.2002 kell 01:46:\n> > \"Josh Berkus\" <[email protected]> writes:\n> > > What have other Postgres users done to speed up aggregates on large\n> > > tables?\n> > \n> > FWIW, I've implemented hashed aggregation in CVS tip.\n> \n> Great! 
\n> \n> This should also make it easier to implement all kinds of GROUP BY\n> ROLLUP|CUBE|GROUPING SETS|() queries.\n\nOf these only ROLLUP can be done in one scan after sort, all others\nwould generally require several scans without hashing.\n\n\nI just noticed that we don't even have a TODO for this. I think this\nwould be a good TODO item.\n\nBruce, could you add:\n\n* Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY\n\n\nThey are all defined in SQL99 p.79 <group by clause>\n\n\nSome more background info (from a quick Google search)\n\na very short overview:\n http://www.neddo.com/dm3e/sql3&olap.html\n\n\nmore thorough guide for DB2:\nhttp://www.student.math.uwaterloo.ca/~cs448/db2_doc/html/db2s0/frame3.htm#db2s0279\n\n\n-----------------\nHannu\n\n", "msg_date": "07 Dec 2002 02:49:58 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Tom Lane kirjutas L, 07.12.2002 kell 02:42:\n> Hannu Krosing <[email protected]> writes:\n> > This should also make it easier to implement all kinds of GROUP BY\n> > ROLLUP|CUBE|GROUPING SETS|() queries.\n> \n> > Do you have any near-term plans for doing them ?\n> \n> Not me.\n\nI'll try to look into it then. \n\nNo promises about when it will be ready ;)\n\n> > Is there a variable to set that would disable one or another, like we\n> > currently have for disabling various join strategies ?\n> \n> enable_hashagg. I didn't think about one to prevent the old style.\n> \n> >> Note that even though there's no SORT, the sort_mem setting is used\n> >> to determine the allowable hashtable size, so a too-small sort_mem\n> >> might discourage the planner from selecting hashed aggregation.\n> \n> > Do you mean that hashed aggregation can't overflow to disk, or would it\n> > just be too slow ?\n> \n> I didn't write any code to let it overflow to disk --- didn't seem\n> likely to be useful. (You're probably better off with a sort-based\n> aggregation if there are too many distinct grouping keys.)\n\nFor simple GROUP BY this is most likely so, but for CUBE or GROUPING SETS \nit may still be faster to overflow to disk than to do N passes over data \ndifferent ordering. \n\nOf course we could use a combined approach here - do it the old way (sort) for \nmain body + run a parallel hashed aggregation for other, out of order groups.\n\n------------\nHannu\n\n\n", "msg_date": "07 Dec 2002 02:55:44 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Tom Lane kirjutas L, 07.12.2002 kell 02:42:\n> Hannu Krosing <[email protected]> writes:\n> > Is there a variable to set that would disable one or another, like we\n> > currently have for disabling various join strategies ?\n> \n> enable_hashagg. I didn't think about one to prevent the old style.\n\ncould be handy for testing.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "07 Dec 2002 02:57:31 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "\nTom,\n\n> FWIW, I've implemented hashed aggregation in CVS tip. I have not had\n> the time to try to benchmark it, but I'd be interested if anyone can\n> run some tests on 7.4devel. Eliminating the need for a SORT step\n> should help aggregations over large datasets.\n\nI'd love to, but I am still too much of a tyro to build Postgres from CVS. 
\nAs soon as there's a tarball of 7.4devel, I'll build it and run comparisons.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 6 Dec 2002 13:58:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> I'd love to, but I am still too much of a tyro to build Postgres from CVS. \n> As soon as there's a tarball of 7.4devel, I'll build it and run comparisons.\n\nThere should be a nightly snapshot tarball on the FTP server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Dec 2002 18:07:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates " }, { "msg_contents": "\nTom,\n\nWe have a winner on simple aggregates:\n\nVersion 7.2.3:\n explain analyze select client_id, count(*) from case_clients group by \nclient_id;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=11892.51..12435.75 rows=10865 width=4) (actual \ntime=1162.27..1569.40 rows=436 loops=1)\n -> Group (cost=11892.51..12164.13 rows=108648 width=4) (actual \ntime=1162.24..1477.70 rows=108648 loops=1)\n -> Sort (cost=11892.51..11892.51 rows=108648 width=4) (actual \ntime=1162.22..1280.64 rows=108648 loops=1)\n -> Seq Scan on case_clients (cost=0.00..2804.48 rows=108648 \nwidth=4) (actual time=0.07..283.14 rows=108648 loops=1)\nTotal runtime: 2387.87 msec\n\nVersus Version 7.4devel:\nexplain analyze select client_id, count(*) from case_clients group by \nclient_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3289.72..3289.84 rows=46 width=4) (actual \ntime=447.80..448.71 rows=436 loops=1)\n -> Seq Scan on case_clients (cost=0.00..2746.48 rows=108648 width=4) \n(actual time=0.08..267.45 rows=108648 loops=1)\n Total runtime: 473.77 msec\n(3 rows)\n\nHowever, more complex queries involving aggregates seem to be unable to make \nuse of the hashaggregate. I'll get back to you when I know what the \nbreakpoint is.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 6 Dec 2002 17:54:43 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "On Fri, 2002-12-06 at 14:46, Tom Lane wrote:\n> \"Josh Berkus\" <[email protected]> writes:\n> > What have other Postgres users done to speed up aggregates on large\n> > tables?\n> \n> FWIW, I've implemented hashed aggregation in CVS tip. I have not had\n> the time to try to benchmark it, but I'd be interested if anyone can\n> run some tests on 7.4devel. 
Eliminating the need for a SORT step\n> should help aggregations over large datasets.\n> \n> Note that even though there's no SORT, the sort_mem setting is used\n> to determine the allowable hashtable size, so a too-small sort_mem\n> might discourage the planner from selecting hashed aggregation.\n> Use EXPLAIN to see which query plan gets chosen.\n\nHi.\n\nWhat exactly is \"hashed aggregation\"?\n\n>From Josh Berkus' email with the EXPLAIN data, it still looks like\nsupporting indexes aren't used, so are you still scanning the table?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "07 Dec 2002 09:48:36 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Tom Lane wrote:\n> FWIW, I've implemented hashed aggregation in CVS tip. I have not had\n> the time to try to benchmark it, but I'd be interested if anyone can\n> run some tests on 7.4devel. Eliminating the need for a SORT step\n> should help aggregations over large datasets.\n> \n> Note that even though there's no SORT, the sort_mem setting is used\n> to determine the allowable hashtable size, so a too-small sort_mem\n> might discourage the planner from selecting hashed aggregation.\n> Use EXPLAIN to see which query plan gets chosen.\n> \n\nHere's some tests on a reasonable sized (and real life as opposed to \ncontrived) dataset:\n\nparts=# set enable_hashagg to off;\nSET\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id = w.part_id group by i.part_id;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=11111.93..11744.90 rows=35528 width=36) (actual \ntime=2799.40..3140.17 rows=34575 loops=1)\n -> Sort (cost=11111.93..11293.31 rows=72553 width=36) (actual \ntime=2799.35..2896.43 rows=72548 loops=1)\n Sort Key: i.part_id\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \ntime=157.72..1231.01 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 \nwidth=22) (actual time=0.01..286.80 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=157.50..157.50 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.02..88.00 rows=35528 loops=1)\n Total runtime: 3168.73 msec\n(9 rows)\n\nparts=# set enable_hashagg to on;\nSET\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id = w.part_id group by i.part_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5617.22..5706.04 rows=35528 width=36) (actual \ntime=1507.89..1608.32 rows=34575 loops=1)\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \ntime=153.46..1231.34 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 width=22) \n(actual time=0.01..274.74 rows=72553 loops=1)\n -> 
Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=153.21..153.21 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.03..84.67 rows=35528 loops=1)\n Total runtime: 1661.53 msec\n(7 rows)\n\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=11111.93..12015.10 rows=35528 width=36) (actual \ntime=2823.65..3263.16 rows=4189 loops=1)\n Filter: (sum(qty_oh) > 0::double precision)\n -> Sort (cost=11111.93..11293.31 rows=72553 width=36) (actual \ntime=2823.40..2926.07 rows=72548 loops=1)\n Sort Key: i.part_id\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \ntime=156.39..1240.61 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 \nwidth=22) (actual time=0.01..290.47 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=156.16..156.16 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.02..86.95 rows=35528 loops=1)\n Total runtime: 3282.27 msec\n(10 rows)\n\n\nNote that similar to Josh, I saw a nice improvement when using the \nHashAggregate on the simpler case, but as soon as I added a HAVING clause the \noptimizer switched back to GroupAggregate.\n\nI'll try to play around with this a bit more later today.\n\nJoe\n\n", "msg_date": "Sun, 08 Dec 2002 11:31:54 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Tom Lane wrote:\n> Note that even though there's no SORT, the sort_mem setting is used\n> to determine the allowable hashtable size, so a too-small sort_mem\n> might discourage the planner from selecting hashed aggregation.\n> Use EXPLAIN to see which query plan gets chosen.\n> \n\nJust to follow up on my last post, I did indeed find that bumping up sort_mem \ncaused a switch back to HashAggregate, and a big improvement:\n\nparts=# show sort_mem ;\n sort_mem\n----------\n 8192\n(1 row)\n\nparts=# set sort_mem to 32000;\nSET\nparts=# show sort_mem ;\n sort_mem\n----------\n 32000\n(1 row)\n\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5254.46..5432.10 rows=35528 width=36) (actual \ntime=1286.89..1399.36 rows=4189 loops=1)\n Filter: (sum(qty_oh) > 0::double precision)\n -> Hash Join (cost=1319.10..4710.31 rows=72553 width=36) (actual \ntime=163.36..947.54 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 width=22) \n(actual time=0.01..266.20 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=162.70..162.70 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.04..88.98 rows=35528 loops=1)\n Total runtime: 1443.93 msec\n(8 rows)\n\nparts=# set sort_mem to 8192;\nSET\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id 
= w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=11111.93..12015.10 rows=35528 width=36) (actual \ntime=2836.98..3261.66 rows=4189 loops=1)\n Filter: (sum(qty_oh) > 0::double precision)\n -> Sort (cost=11111.93..11293.31 rows=72553 width=36) (actual \ntime=2836.73..2937.78 rows=72548 loops=1)\n Sort Key: i.part_id\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \ntime=155.42..1258.40 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 \nwidth=22) (actual time=0.01..308.57 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=155.19..155.19 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.02..86.82 rows=35528 loops=1)\n Total runtime: 3281.75 msec\n(10 rows)\n\n\nSo when it gets used, HashAggregate has provided a factor of two improvement \non this test case at least. Nice work, Tom!\n\nJoe\n\n", "msg_date": "Sun, 08 Dec 2002 11:37:47 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "On Sun, 2002-12-08 at 19:31, Joe Conway wrote:\n\n> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \n> i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n> QUERY PLAN \n...\n> Total runtime: 3282.27 msec\n> (10 rows)\n> \n> \n> Note that similar to Josh, I saw a nice improvement when using the \n> HashAggregate on the simpler case, but as soon as I added a HAVING clause the \n> optimizer switched back to GroupAggregate.\n> \n> I'll try to play around with this a bit more later today.\n\nTry turning the having into subquery + where:\n\nexplain analyze\nselect * from (\n select i.part_id, sum(w.qty_oh) as total_oh\n from inv i, iwhs w\n where i.part_id = w.part_id\n group by i.part_id) sub\nwhere total_oh > 0;\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "09 Dec 2002 10:16:01 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "\nAdded.\n\n---------------------------------------------------------------------------\n\nHannu Krosing wrote:\n> Hannu Krosing kirjutas L, 07.12.2002 kell 02:32:\n> > Tom Lane kirjutas L, 07.12.2002 kell 01:46:\n> > > \"Josh Berkus\" <[email protected]> writes:\n> > > > What have other Postgres users done to speed up aggregates on large\n> > > > tables?\n> > > \n> > > FWIW, I've implemented hashed aggregation in CVS tip.\n> > \n> > Great! \n> > \n> > This should also make it easier to implement all kinds of GROUP BY\n> > ROLLUP|CUBE|GROUPING SETS|() queries.\n> \n> Of these only ROLLUP can be done in one scan after sort, all others\n> would generally require several scans without hashing.\n> \n> \n> I just noticed that we don't even have a TODO for this. 
I think this\n> would be a good TODO item.\n> \n> Bruce, could you add:\n> \n> * Add ROLLUP, CUBE, GROUPING SETS options to GROUP BY\n> \n> \n> They are all defined in SQL99 p.79 <group by clause>\n> \n> \n> Some more background info (from a quick Google search)\n> \n> a very short overview:\n> http://www.neddo.com/dm3e/sql3&olap.html\n> \n> \n> more thorough guide for DB2:\n> http://www.student.math.uwaterloo.ca/~cs448/db2_doc/html/db2s0/frame3.htm#db2s0279\n> \n> \n> -----------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 9 Dec 2002 13:09:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Hannu Krosing wrote:\n> On Sun, 2002-12-08 at 19:31, Joe Conway wrote:\n> \n> \n>>parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \n>>i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n>> QUERY PLAN \n> \n> ...\n> \n>> Total runtime: 3282.27 msec\n>>(10 rows)\n>>\n>>\n>>Note that similar to Josh, I saw a nice improvement when using the \n>>HashAggregate on the simpler case, but as soon as I added a HAVING clause the \n>>optimizer switched back to GroupAggregate.\n>>\n>>I'll try to play around with this a bit more later today.\n> \n> \n> Try turning the having into subquery + where:\n> \n> explain analyze\n> select * from (\n> select i.part_id, sum(w.qty_oh) as total_oh\n> from inv i, iwhs w\n> where i.part_id = w.part_id\n> group by i.part_id) sub\n> where total_oh > 0;\n> \n\nPretty much the same result. 
See below.\n\nJoe\n\n======================================\nparts=# set sort_mem to 8000;\nSET\nparts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as \ntotal_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id) \nsub where total_oh > 0;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan sub (cost=11111.93..12015.10 rows=35528 width=36) (actual \ntime=2779.16..3212.46 rows=4189 loops=1)\n -> GroupAggregate (cost=11111.93..12015.10 rows=35528 width=36) (actual \ntime=2779.15..3202.97 rows=4189 loops=1)\n Filter: (sum(qty_oh) > 0::double precision)\n -> Sort (cost=11111.93..11293.31 rows=72553 width=36) (actual \ntime=2778.90..2878.33 rows=72548 loops=1)\n Sort Key: i.part_id\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) \n(actual time=155.80..1235.32 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 \nwidth=22) (actual time=0.01..282.38 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) \n(actual time=155.56..155.56 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 \nrows=35528 width=14) (actual time=0.02..86.69 rows=35528 loops=1)\n Total runtime: 3232.84 msec\n(11 rows)\n\nparts=# set sort_mem to 12000;\nSET\nparts=# explain analyze select * from (select i.part_id, sum(w.qty_oh) as \ntotal_oh from inv i, iwhs w where i.part_id = w.part_id group by i.part_id) \nsub where total_oh > 0;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan sub (cost=5617.22..5794.86 rows=35528 width=36) (actual \ntime=1439.24..1565.47 rows=4189 loops=1)\n -> HashAggregate (cost=5617.22..5794.86 rows=35528 width=36) (actual \ntime=1439.23..1555.65 rows=4189 loops=1)\n Filter: (sum(qty_oh) > 0::double precision)\n -> Hash Join (cost=1319.10..5073.07 rows=72553 width=36) (actual \ntime=159.39..1098.30 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 \nwidth=22) (actual time=0.01..259.48 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=159.11..159.11 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.03..87.74 rows=35528 loops=1)\n Total runtime: 1609.91 msec\n(9 rows)\n\n\n", "msg_date": "Mon, 09 Dec 2002 13:04:18 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> Just to follow up on my last post, I did indeed find that bumping up sort_mem\n> caused a switch back to HashAggregate, and a big improvement:\n\n> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \n> i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=5254.46..5432.10 rows=35528 width=36) (actual \n> time=1286.89..1399.36 rows=4189 loops=1)\n> Filter: (sum(qty_oh) > 0::double precision)\n> -> Hash Join (cost=1319.10..4710.31 rows=72553 width=36) (actual \n> time=163.36..947.54 rows=72548 loops=1)\n\nHow many rows out if you drop the HAVING clause?\n\nThe 
planner's choice of which to use is dependent on its estimate of the\nrequired hashtable size, which is proportional to its guess about how\nmany distinct groups there will be. The above output doesn't tell us\nthat however, only how many groups passed the HAVING clause. I'm\ncurious about the quality of this estimate, since the code to try to\ngenerate not-completely-bogus group count estimates is all new ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Dec 2002 16:26:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <[email protected]> writes:\n> \n>>Just to follow up on my last post, I did indeed find that bumping up sort_mem\n>>caused a switch back to HashAggregate, and a big improvement:\n> \n> \n>>parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \n>>i, iwhs w where i.part_id = w.part_id group by i.part_id having sum(w.qty_oh) > 0;\n>> QUERY PLAN\n>>----------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate (cost=5254.46..5432.10 rows=35528 width=36) (actual \n>>time=1286.89..1399.36 rows=4189 loops=1)\n>> Filter: (sum(qty_oh) > 0::double precision)\n>> -> Hash Join (cost=1319.10..4710.31 rows=72553 width=36) (actual \n>>time=163.36..947.54 rows=72548 loops=1)\n> \n> \n> How many rows out if you drop the HAVING clause?\n\nparts=# set sort_mem to 8000;\nSET\nparts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \ni, iwhs w where i.part_id = w.part_id group by i.part_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5617.22..5706.04 rows=35528 width=36) (actual \ntime=1525.93..1627.41 rows=34575 loops=1)\n -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \ntime=156.86..1248.73 rows=72548 loops=1)\n Hash Cond: (\"outer\".part_id = \"inner\".part_id)\n -> Seq Scan on iwhs w (cost=0.00..2121.53 rows=72553 width=22) \n(actual time=0.01..274.00 rows=72553 loops=1)\n -> Hash (cost=1230.28..1230.28 rows=35528 width=14) (actual \ntime=156.65..156.65 rows=0 loops=1)\n -> Seq Scan on inv i (cost=0.00..1230.28 rows=35528 \nwidth=14) (actual time=0.03..86.86 rows=35528 loops=1)\n Total runtime: 1680.86 msec\n(7 rows)\n\n\n> The planner's choice of which to use is dependent on its estimate of the\n> required hashtable size, which is proportional to its guess about how\n> many distinct groups there will be. The above output doesn't tell us\n> that however, only how many groups passed the HAVING clause. 
I'm\n> curious about the quality of this estimate, since the code to try to\n> generate not-completely-bogus group count estimates is all new ...\n\nIf I'm reading it correctly, it looks like the estimate in this case is pretty \ngood.\n\nJoe\n\n\n", "msg_date": "Mon, 09 Dec 2002 13:48:21 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n>> How many rows out if you drop the HAVING clause?\n\n> parts=# set sort_mem to 8000;\n> SET\n> parts=# explain analyze select i.part_id, sum(w.qty_oh) as total_oh from inv \n> i, iwhs w where i.part_id = w.part_id group by i.part_id;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=5617.22..5706.04 rows=35528 width=36) (actual \n> time=1525.93..1627.41 rows=34575 loops=1)\n> -> Hash Join (cost=1319.10..5254.45 rows=72553 width=36) (actual \n> time=156.86..1248.73 rows=72548 loops=1)\n\n\n>> The planner's choice of which to use is dependent on its estimate of the\n>> required hashtable size, which is proportional to its guess about how\n>> many distinct groups there will be. The above output doesn't tell us\n>> that however, only how many groups passed the HAVING clause. I'm\n>> curious about the quality of this estimate, since the code to try to\n>> generate not-completely-bogus group count estimates is all new ...\n\n> If I'm reading it correctly, it looks like the estimate in this case is pretty \n> good.\n\nBetter than I had any right to expect ;-). Thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Dec 2002 17:06:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speeding up aggregates " } ]
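The hash-versus-sort choice traced through this thread hinges on the planner's estimate of how many distinct groups the aggregation will produce, and that estimate comes from the optimizer statistics. A minimal way to inspect and, if needed, sharpen that estimate is sketched below; it assumes the pg_stats system view is available and uses the table and column names from the plans above, with statistics only as fresh as the last ANALYZE.

ANALYZE inv;
ANALYZE iwhs;

-- negative n_distinct values mean "this fraction of the table's rows"
SELECT tablename, attname, n_distinct
  FROM pg_stats
 WHERE tablename IN ('inv', 'iwhs')
   AND attname = 'part_id';

-- if the estimate looks far off, raise the per-column statistics
-- target and re-analyze before judging the plan again
ALTER TABLE iwhs ALTER COLUMN part_id SET STATISTICS 100;
ANALYZE iwhs;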
[ { "msg_contents": "\nSilly question (and just because I would like to know exactly why):\n\nThis query:\nselect distinct x, y \n from table1 t \n join table2 t2\n using (col1) \norder by x;\n\nis *slower* than this query:\n\nselect distinct x, y \n from table1 \n where col1 = (select col1 from table2) \nORDER BY x;\n\nIs this because in the latter case the select col1 is cached?\n\nOoo, I would love to have a web page full of these tidbits (along with how \nto get around the max and min aggregates and why as an example..., etc.)!\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nThere is more to life than just SQL.\n\n\n", "msg_date": "Fri, 6 Dec 2002 15:32:01 -0800 (PST)", "msg_from": "Laurette Cisneros <[email protected]>", "msg_from_op": true, "msg_subject": "query question" }, { "msg_contents": "\nLaurette,\n\n> This query:\n> select distinct x, y \n> from table1 t \n> join table2 t2\n> using (col1) \n> order by x;\n> \n> is *slower* than this query:\n> \n> select distinct x, y \n> from table1 \n> where col1 = (select col1 from table2) \n> ORDER BY x;\n> \n> Is this because in the latter case the select col1 is cached?\n\nYes. For all of the following structures:\n\nwhere x = (select col from table)\nwhere x IN (select col from table)\nwhere x NOT IN (select col from table)\nwhere x != ANY(select col from table)\netc.,\n\n... Postgres must process the full subquery, return the results, and compare \nall of the results as individual values against the reference column. \n\nHowever, if you re-wrote the query as:\n\n select distinct x, y \n from table1 \n where EXISTS (select col1 from table2\n\twhere table2.col1 = table1.col1)\n ORDER BY x;\n\n... then Postgres would be able to use JOIN optimizations to evaluate the \nsubquery and pull a subset of relevant records or even use an index, making \nthe query *much* faster.\n\n> Ooo, I would love to have a web page full of these tidbits (along with how \n> to get around the max and min aggregates and why as an example..., etc.)!\n\nUm:\n\nhttp://techdocs.postgresql.org/guides/\n\nAdd your own Wiki page!\n\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 6 Dec 2002 16:38:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query question" } ]
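One of the tidbits asked for in this thread, working around min() and max(), is worth a sketch of its own. Because those aggregates scan every row, an ORDER BY ... LIMIT 1 form can be answered from one end of an index instead. The table and column names below are the illustrative ones from the question, and an ordinary btree index on col1 is assumed; whether the planner actually picks the index scan is easy to confirm with EXPLAIN.

-- instead of: SELECT max(col1) FROM table1;
SELECT col1
  FROM table1
 WHERE col1 IS NOT NULL
 ORDER BY col1 DESC
 LIMIT 1;

-- instead of: SELECT min(col1) FROM table1;
SELECT col1
  FROM table1
 WHERE col1 IS NOT NULL
 ORDER BY col1 ASC
 LIMIT 1;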
[ { "msg_contents": "Greetings!\n\nI am trying to find a way to optimize this query and have hit a wall. The\ndatabase size is 2.9 GB and contains 1 million records. The system is a\ndual xeon 1 ghz P3 with 4 GB ram, 2 of it shared memory. Redhat linux\nkernel 2.4.18-5 ext3fs.\n\nI'm hoping I haven't hit the limit of the hardware or os but here's all\nthe relevant info. Questions, comments, solutions would be greatly\nappreciated.\n\n11696 postgres 25 0 1084M 1.1G 562M R 99.9 28.6 2:36 postmaster\n\nPostgresql.conf settings\nshared_buffers = 250000\nsort_mem = 1048576 # min 32\nvacuum_mem = 128000 # min 1024\nwal_files = 64 # range 0-64\nenable_seqscan = false\nenable_indexscan = true\nenable_tidscan = true\nenable_sort = true\nenable_nestloop = true\nenable_mergejoin = true\nenable_hashjoin = true\n\n[postgres@db1 base]$ cat /proc/sys/kernel/shmmax\n2192000000\n\ndatabase=# explain analyze SELECT active,registrant,name FROM person WHERE\nobject.active = 1 AND object.registrant = 't' ORDER BY UPPER(object.name)\nDESC LIMIT 10 OFFSET 0;\nNOTICE: QUERY PLAN:\n\nLimit (cost=nan..nan rows=10 width=2017) (actual\ntime=204790.82..204790.84 rows=10 loops=1)\n -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\ntime=204790.81..204790.82 rows=11 loops=1)\n -> Index Scan using registrant__object__idx on object \n(cost=0.00..81733.63 rows=1032953 width=2017) (actual\ntime=0.14..94509.14 rows=1032946 loops=1)\nTotal runtime: 205125.75 msec\n\nNOTICE: QUERY PLAN:\n\nLimit (cost=nan..nan rows=10 width=2017) (actual\ntime=204790.82..204790.84 rows=10 loops=1)\n -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\ntime=204790.81..204790.82 rows=11 loops=1)\n -> Index Scan using registrant__object__idx on object \n(cost=0.00..81733.63 rows=1032953 width=2017) (actual\ntime=0.14..94509.14 rows=1032946 loops=1)\nTotal runtime: 205125.75 msec\n\n\n", "msg_date": "Fri, 6 Dec 2002 18:16:43 -0800 (PST)", "msg_from": "\"Fred Moyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query optimization" }, { "msg_contents": "On Saturday 07 Dec 2002 2:16 am, Fred Moyer wrote:\n>\n> database=# explain analyze SELECT active,registrant,name FROM person WHERE\n> object.active = 1 AND object.registrant = 't' ORDER BY UPPER(object.name)\n> DESC LIMIT 10 OFFSET 0;\n> NOTICE: QUERY PLAN:\n\nWhat's the connection between \"person\" and \"object\"? Looks like an \nunconstrained join from here. Schema and count(*) for both and details of \nindexes would be useful.\n\n> Limit (cost=nan..nan rows=10 width=2017) (actual\n ^^^^^^^^\nNever seen this \"nan\" before - presumably Not A Number, but I don't know why \nthe planner generates it\n\n> time=204790.82..204790.84 rows=10 loops=1)\n> -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\n> time=204790.81..204790.82 rows=11 loops=1)\n> -> Index Scan using registrant__object__idx on object\n> (cost=0.00..81733.63 rows=1032953 width=2017) (actual\n> time=0.14..94509.14 rows=1032946 loops=1)\n> Total runtime: 205125.75 msec\n\nWithout seeing schema details difficult to suggest much. If it's this \nparticular query that's the problem you might try a partial index\n\nCREATE INDEX foo_object_idx ON object (upper(object.name)) WHERE active=1 AND \nregistrant='t';\n\nSee CREATE INDEX in the manuals for details.\n\n-- \n Richard Huxton\n", "msg_date": "Sat, 7 Dec 2002 17:13:08 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization" }, { "msg_contents": "Ikes, they are the same, a cut and paste error. 
Sorry about that. No\njoins involved, one table with 1 million records, about 255 rows, only\nabout 10% of the rows contain data in this particular instance.\n\nobject is indexed on active, registrant, and name as well as UPPER(name).\nPostgres version is 7.2.3\n\nHere is the relevant table info (some schema details omitted for brevity)\n\nid | numeric(10,0) | not null default\nnextval('seq_object'\n::text)\nname | character varying(64) |\nregistrant | boolean |\nactive | numeric(1,0) | not null default 1\n\nregistrant__object__idx\nactive__object__idx,\nname__object__idx,\nupper_name__object__idx,\nid__object__idx,\nPrimary key: pk_object__id\n\ndb=# select count(*) from count;\n count\n---------\n 1032953\n(1 row)\n\ndb=# explain analyze select count(*) from object;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=100073270.91..100073270.91 rows=1 width=0) (actual\ntime=3085.51..3085.51 rows=1 loops=1)\n -> Seq Scan on object (cost=100000000.00..100070688.53 rows=1032953\nwidth=0) (actual time=0.01..2008.51 rows=1032953 loops=1)\nTotal runtime: 3085.62 msec\n\nEXPLAIN\n\n> On Saturday 07 Dec 2002 2:16 am, Fred Moyer wrote:\n>>\n>> database=# explain analyze SELECT active,registrant,name FROM object\n>> WHERE object.active = 1 AND object.registrant = 't' ORDER BY\n>> UPPER(object.name) DESC LIMIT 10 OFFSET 0;\n>> NOTICE: QUERY PLAN:\n>\n> What's the connection between \"person\" and \"object\"? Looks like an\n> unconstrained join from here. Schema and count(*) for both and details\n> of indexes would be useful.\n>\n>> Limit (cost=nan..nan rows=10 width=2017) (actual\n> ^^^^^^^^\n> Never seen this \"nan\" before - presumably Not A Number, but I don't know\n> why the planner generates it\n>\n>> time=204790.82..204790.84 rows=10 loops=1)\n>> -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\n>> time=204790.81..204790.82 rows=11 loops=1)\n>> -> Index Scan using registrant__object__idx on object\n>> (cost=0.00..81733.63 rows=1032953 width=2017) (actual\n>> time=0.14..94509.14 rows=1032946 loops=1)\n>> Total runtime: 205125.75 msec\n>\n> Without seeing schema details difficult to suggest much. If it's this\n> particular query that's the problem you might try a partial index\n>\n> CREATE INDEX foo_object_idx ON object (upper(object.name)) WHERE\n> active=1 AND registrant='t';\n>\n> See CREATE INDEX in the manuals for details.\n>\n> --\n> Richard Huxton\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\nFred Moyer\nDigital Campaigns, Inc.\n\n\n", "msg_date": "Sat, 7 Dec 2002 12:10:41 -0800 (PST)", "msg_from": "\"Fred Moyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query optimization" }, { "msg_contents": "Fred Moyer wrote:\n> \n> I am trying to find a way to optimize this query and have hit a wall. The\n> database size is 2.9 GB and contains 1 million records.\n\n> Postgresql.conf settings\n> shared_buffers = 250000\n\nThis looks awfull high to me. 25000 might be better to give more room to \nthe OS disk-caching. Bit of a waste if PostgreSQL and the OS start \ncaching exactly the same blocks.\nTrying is the only way to find a good setting.\n\n\n> sort_mem = 1048576 # min 32\n> vacuum_mem = 128000 # min 1024\n> wal_files = 64 # range 0-64\n> enable_seqscan = false\n\nWhy disable seqscan? 
For any query that is not particularly selective \nthis will mean a performance hit.\n\n\n> enable_indexscan = true\n> enable_tidscan = true\n> enable_sort = true\n> enable_nestloop = true\n> enable_mergejoin = true\n> enable_hashjoin = true\n\n> database=# explain analyze SELECT active,registrant,name FROM person WHERE\n> object.active = 1 AND object.registrant = 't' ORDER BY UPPER(object.name)\n> DESC LIMIT 10 OFFSET 0;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=nan..nan rows=10 width=2017) (actual\n> time=204790.82..204790.84 rows=10 loops=1)\n> -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\n> time=204790.81..204790.82 rows=11 loops=1)\n> -> Index Scan using registrant__object__idx on object \n> (cost=0.00..81733.63 rows=1032953 width=2017) (actual\n> time=0.14..94509.14 rows=1032946 loops=1)\n> Total runtime: 205125.75 msec\n\nI think this is an example of a not particularly selective query. If I \nread it correctly, pretty much every row satisfies the predicates\nobject.active = 1 AND object.registrant = 't' (how much do not satisfy \nthese predicates?).\n\nJochem\n\n", "msg_date": "Sat, 07 Dec 2002 21:41:57 +0100", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization" }, { "msg_contents": "On Saturday 07 Dec 2002 8:10 pm, Fred Moyer wrote:\n> Ikes, they are the same, a cut and paste error. Sorry about that. No\n> joins involved, one table with 1 million records, about 255 rows, only\n> about 10% of the rows contain data in this particular instance.\n>\n> object is indexed on active, registrant, and name as well as UPPER(name).\n> Postgres version is 7.2.3\n\nI think Jochem's got it with \"enable_seqscan\" - you've disabled scans so the \nplanner is checking one million index entries - bad idea. Try Jochem's \nsuggestion of re-enabling seqscan and see if that helps things along.\n\n> db=# select count(*) from count;\n> count\n> ---------\n> 1032953\n\n> >> time=204790.82..204790.84 rows=10 loops=1)\n> >> -> Sort (cost=nan..nan rows=1032953 width=2017) (actual\n> >> time=204790.81..204790.82 rows=11 loops=1)\n> >> -> Index Scan using registrant__object__idx on object\n> >> (cost=0.00..81733.63 rows=1032953 width=2017) (actual\n> >> time=0.14..94509.14 rows=1032946 loops=1)\n> >> Total runtime: 205125.75 msec\n\n-- \n Richard Huxton\n", "msg_date": "Sun, 8 Dec 2002 13:48:44 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization" } ]
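To make the partial-index suggestion from this thread concrete, a sketch follows. The CREATE INDEX is essentially the one Richard proposed (written against the bare column name); the index name is made up, and it is an assumption that the planner will both match the query's WHERE clause against the index predicate and walk the index backwards for the DESC ordering, so the EXPLAIN ANALYZE step is the real test. Re-enabling sequential scans first lets the planner compare plans honestly.

SET enable_seqscan = on;

CREATE INDEX object_upper_name_partial_idx
    ON object (upper(name))
 WHERE active = 1 AND registrant = 't';

EXPLAIN ANALYZE
SELECT active, registrant, name
  FROM object
 WHERE active = 1 AND registrant = 't'
 ORDER BY upper(name) DESC
 LIMIT 10 OFFSET 0;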
[ { "msg_contents": "Hi\n\nI am doing a research project on real time robotics and wanted to have the postgres as a database to save measurements we aggregate when running our robots.\n\nA common operation we do is insertion and movements of measurements within tables. But it seams as if insertion and movement times are correlated to the size of the database. Can this be possible? IE inserting into a large database takes longer time than into a small database.\n\nI'd be grateful for comments on the reason for this.\n\nCarl Barck-Holst\n\n\nCarl och Josefine Barck-Holst\nÖstermalms g 84\n11450 Stockholm\n08 6679904\nCarl 070-2642506\nJosefine 073-9648103\n\n", "msg_date": "Mon, 9 Dec 2002 07:30:48 +0100", "msg_from": "\"Kalle Barck-Holst\" <[email protected]>", "msg_from_op": true, "msg_subject": "is insertion and movement times are correlated to the size of the\n\tdatabase?" }, { "msg_contents": "On Mon, 2002-12-09 at 01:30, Kalle Barck-Holst wrote:\n> Hi\n> \n> I am doing a research project on real time robotics and wanted to have the postgres as a database to save measurements we aggregate when running our robots.\n> \n> A common operation we do is insertion and movements of measurements within tables. But it seams as if insertion and movement times are correlated to the size of the database. Can this be possible? IE inserting into a large database takes longer time than into a small database.\n\nIf there are any indexes or constraints, then definitely. The insert\nreally doesn't have additional overhead for the size of the DB, but the\nwork involved for most constraints and indexes do.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "09 Dec 2002 08:04:58 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is insertion and movement times are correlated to" } ]
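A practical consequence of the reply above is that the growing cost is mostly index and constraint maintenance, so the usual mitigations are to batch many inserts into one transaction (one commit instead of one per row) or, for a large one-off load, to drop the index, load, and rebuild it once. The sketch below assumes a hypothetical measurements table and index; the names and columns are made up for illustration.

-- many small inserts: wrap them in a single transaction
BEGIN;
INSERT INTO measurements (robot_id, taken_at, value) VALUES (1, now(), 0.42);
-- ... thousands more rows ...
COMMIT;

-- one-off bulk load: drop the index, COPY the data in, rebuild the index
DROP INDEX measurements_taken_at_idx;
COPY measurements FROM '/tmp/measurements.dat';
CREATE INDEX measurements_taken_at_idx ON measurements (taken_at);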
[ { "msg_contents": "hi\ni have a question about best harddisk configuration for postgresql\nperformance.\nof course i know that:\n- scsi is better than ide\n- 2 disks are better than 1\n- 3 disks are better than 2\n\ni know that with 3 disks one should move xlog to one drive, index files\nto second and tables to third.\nthat's clear.\n\nbut:\nwill making software raid on this discs provide performance increase or\ndecrease?\nwhich raid (0,1,5,10?) is best for postgresql? maybe it differs when it\ncomes to different datatypes (i.e. raid \"X\" is best for indices, but \"Y\"\nbest for tables).\n\ni'd like to know what are the options to store all this information\n(xlog, indices and tables). what configurations are best, what medium\nand what should be avoided at all cost.\n\nhope you can help me, and sorry for my english.\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\nMďż˝j Boďż˝e, spraw abym milczaďż˝, dopďż˝ki siďż˝ nie upewniďż˝, ďż˝e naprawdďż˝ mam\ncoďż˝ do powiedzenia. (c) 1998 depesz\n\n", "msg_date": "Mon, 9 Dec 2002 12:32:31 +0100", "msg_from": "Hubert depesz Lubaczewski <[email protected]>", "msg_from_op": true, "msg_subject": "questions about disk configurations" }, { "msg_contents": "On Mon, 2002-12-09 at 12:32, Hubert depesz Lubaczewski wrote:\n> hi\n> i have a question about best harddisk configuration for postgresql\n> performance.\n[...]\n\nYo!\n\nA bit more data is needed before anybody can give you more help:\n - what is your budget?\n - how big will your databases be?\n - what's the read/write ratio?\n\nEven then you'll not get any good recipes, because there aren't any.\nYou'll have to do benchmarks yourself. A few fundamental things that are\nprobably true for most:\n\n - more RAM is always good. Independent from the disc architecture - if\nan access isn't going to the disc at all, it's always good. (if you're\nmostly writing this may be lessened).\n - always carefully tune the postgres installation (random page cost,\nsort mem, shared buffers, ... - all depend on your system and you\napplication)\n - as you correctly said: distribute the load on many spindles. On a\nbusy database, 4*20G is probably faster than 1*80G\n\nbeyound this, experiences vary. RAID1 and RAID5 are rated differently by\ndifferent people - and especially with RAID5 there are (I think) really\nperformance differencies between the various products. RAID0 is fastest,\nof course, but you probably care for your data.\n\nFor equally good implementations, RAID1 and RAID5 may have similar\nspeed, especially if the RAID controller for RAID5 has enough RAM. If\nthe active dataset on a RAID5 is bigger than the available caching RAM,\nwrite performance sucks as a single block write requires 2 reads and 2\nwrites. If the RAID5 controller has enough RAM (and a decent\nimplementation), write performance can be almost equal to RAID1 (2\nwrites for a single block write).\n\nSo far\n-- vbi\n\n\n-- \nthis email is protected by a digital signature: http://fortytwo.ch/gpg\n\nNOTE: keyserver bugs! 
get my key here: https://fortytwo.ch/gpg/92082481", "msg_date": "09 Dec 2002 13:01:58 +0100", "msg_from": "Adrian 'Dagurashibanipal' von Bidder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Mon, Dec 09, 2002 at 01:01:58PM +0100, Adrian 'Dagurashibanipal' von Bidder wrote:\n> A bit more data is needed before anybody can give you more help:\n> - what is your budget?\n> - how big will your databases be?\n> - what's the read/write ratio?\n\nmy question as for now is purely theoretical. i'm not asking about any\nspecific situation, but me may talk about medium sized web size. budget\nis irrelevant (i'd like to talk *only* about harddrives, not memory,\narchitescure and so on).\n\n> - as you correctly said: distribute the load on many spindles. On a\n> busy database, 4*20G is probably faster than 1*80G\n\nas i said: i know that 3 disks are bettar than 1 (as for postgres\ninstallation, because system data and swap should be on 4th disc - but\nthis is obvious).\n\n> beyound this, experiences vary. RAID1 and RAID5 are rated differently by\n> different people - and especially with RAID5 there are (I think) really\n> performance differencies between the various products. RAID0 is fastest,\n> of course, but you probably care for your data.\n\nthat's exactly what i'm asking about: which raid is best suited for\nwhich data amongst out 3 sets (xlog, tables, indices). or maybe for some\ntypes of data single disc is better than raid for some strange reason?\nis it better to (when having 2 discs) setup raid 0/1 or to use tham\nseparatelly as xlog/tables?\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\nMďż˝j Boďż˝e, spraw abym milczaďż˝, dopďż˝ki siďż˝ nie upewniďż˝, ďż˝e naprawdďż˝ mam\ncoďż˝ do powiedzenia. (c) 1998 depesz\n\n", "msg_date": "Mon, 9 Dec 2002 14:05:21 +0100", "msg_from": "Hubert depesz Lubaczewski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "\nDepesz,\n\n> i have a question about best harddisk configuration for postgresql\n> performance.\n> of course i know that:\n> - scsi is better than ide\n> - 2 disks are better than 1\n> - 3 disks are better than 2\n> \n> i know that with 3 disks one should move xlog to one drive, index files\n> to second and tables to third.\n> that's clear.\n\nEr, no, it's not. In fact, for a 3-disk config, I reccommend:\n\nDisk 1: OS, swap, system logs\nDisk 2: Data + Indexes\nDisk 3: Transaction Log\n\n> but:\n> will making software raid on this discs provide performance increase or\n> decrease?\n\nHardware RAID can improve *read* performance, particilarly RAIDs 1, 01, and \n10. For writing, the best you can do is having it not inhibit performance. \nThe general testament is that *software* RAID does not improve things at all; \nactually, the best that can be said for Linux Software RAID 1 is that it does \nnot harm performance much.\n\n> i'd like to know what are the options to store all this information\n> (xlog, indices and tables). what configurations are best, what medium\n> and what should be avoided at all cost.\n\nAsk specific questions. 
If you want the full performance tutorial, you'd have \nto pay a steep fee for 1-3 days of training.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 9 Dec 2002 11:53:03 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "I don't know whether you have read this link but it was helpful to me.\n\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/0.html\n\nIt discusses PostgreSQL Hardware Performance Tuning.\n\nHope it helps!\n\nKeith Bottner\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Hubert\ndepesz Lubaczewski\nSent: Monday, December 09, 2002 7:05 AM\nTo: Adrian 'Dagurashibanipal' von Bidder;\[email protected]\nSubject: Re: [PERFORM] questions about disk configurations\n\n\nOn Mon, Dec 09, 2002 at 01:01:58PM +0100, Adrian 'Dagurashibanipal' von\nBidder wrote:\n> A bit more data is needed before anybody can give you more help:\n> - what is your budget?\n> - how big will your databases be?\n> - what's the read/write ratio?\n\nmy question as for now is purely theoretical. i'm not asking about any\nspecific situation, but me may talk about medium sized web size. budget\nis irrelevant (i'd like to talk *only* about harddrives, not memory,\narchitescure and so on).\n\n> - as you correctly said: distribute the load on many spindles. On a \n> busy database, 4*20G is probably faster than 1*80G\n\nas i said: i know that 3 disks are bettar than 1 (as for postgres\ninstallation, because system data and swap should be on 4th disc - but\nthis is obvious).\n\n> beyound this, experiences vary. RAID1 and RAID5 are rated differently \n> by different people - and especially with RAID5 there are (I think) \n> really performance differencies between the various products. RAID0 is\n\n> fastest, of course, but you probably care for your data.\n\nthat's exactly what i'm asking about: which raid is best suited for\nwhich data amongst out 3 sets (xlog, tables, indices). or maybe for some\ntypes of data single disc is better than raid for some strange reason?\nis it better to (when having 2 discs) setup raid 0/1 or to use tham\nseparatelly as xlog/tables?\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\nMój Boże, spraw abym milczał, dopóki się nie upewnię, że naprawdę mam\ncoś do powiedzenia. (c) 1998 depesz\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Mon, 9 Dec 2002 14:00:32 -0600", "msg_from": "\"Keith Bottner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Mon, 2002-12-09 at 07:05, Hubert depesz Lubaczewski wrote:\n> On Mon, Dec 09, 2002 at 01:01:58PM +0100, Adrian 'Dagurashibanipal' von Bidder wrote:\n> > A bit more data is needed before anybody can give you more help:\n> > - what is your budget?\n> > - how big will your databases be?\n> > - what's the read/write ratio?\n> \n> my question as for now is purely theoretical. i'm not asking about any\n> specific situation, but me may talk about medium sized web size. budget\n> is irrelevant (i'd like to talk *only* about harddrives, not memory,\n> architescure and so on).\n\nWhat is \"medium sized web\"? The *system* *is* important!! 
Stuffing\nyour box with RAM may, in fact, override your disks, if the RAM caches\nenough.\n\n> > - as you correctly said: distribute the load on many spindles. On a\n> > busy database, 4*20G is probably faster than 1*80G\n> \n> as i said: i know that 3 disks are bettar than 1 (as for postgres\n> installation, because system data and swap should be on 4th disc - but\n> this is obvious).\n> \n> > beyound this, experiences vary. RAID1 and RAID5 are rated differently by\n> > different people - and especially with RAID5 there are (I think) really\n> > performance differencies between the various products. RAID0 is fastest,\n> > of course, but you probably care for your data.\n> \n> that's exactly what i'm asking about: which raid is best suited for\n> which data amongst out 3 sets (xlog, tables, indices). or maybe for some\n> types of data single disc is better than raid for some strange reason?\n> is it better to (when having 2 discs) setup raid 0/1 or to use tham\n> separatelly as xlog/tables?\n\nThese are *GENERALITIES*!!!! _All_ is dependent on which SCSI\ncontroller you choose, and how much cache it has!!!!!!!!\n\n- RAID0 does *great* at both reading and writing, but everyone knows\nthat it is insecure.\n- RAID1 does better than JBOD at reading and writing, but not as\ngood as RAID0.\n- RAID01 and RAID10 do just about as good as RAID0.\n- RAID5 does great with reads, but bad with writes, *unless* the\ncontroller has *lots* of cache. Then, write speeds are great.\n\nSlightly off topic: if I have Important Data, then I would not trust\na caching controlller unless it has a battery backup. Unfortunately,\nthe only \"caching controlllers with battery backup\" that I've seen\nare pretty expensive...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "09 Dec 2002 14:46:40 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "> \n> Er, no, it's not. In fact, for a 3-disk config, I reccommend:\n> \n> Disk 1: OS, swap, system logs\n> Disk 2: Data + Indexes\n> Disk 3: Transaction Log\n\nWhat is the accepted way of splitting the data from pg_xlog?\n\nI've been testing some configurations for low budget performance, and I haven't been able to make this help vs. one disk. (under osx, ymmv)\n\nI rsync'd the pg_xlog directory to another disk, then set up a symlink pointing from the data/pg_xlog to /other/disk/pg_xlog. \n\nI then got tps numbers that were 2/3 of the single ide drive speed. The only explanation I can come up with is that something is seeking to the symlink, then doing the actual write on the other drive. \n\nI'm going to try this under linux using mount points, but I need to shuffle hardware first. \n\n\nany ideas?\n\neric\n\n\n\n", "msg_date": "Mon, 09 Dec 2002 13:42:02 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "\nEric,\n\n> I'm going to try this under linux using mount points, but I need to shuffle \nhardware first. \n\nThis is the only way I've done it. 
I'm not sure what the Mac problem is.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 9 Dec 2002 14:14:30 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Mon, 9 Dec 2002, eric soroos wrote:\n\n> > \n> > Er, no, it's not. In fact, for a 3-disk config, I reccommend:\n> > \n> > Disk 1: OS, swap, system logs\n> > Disk 2: Data + Indexes\n> > Disk 3: Transaction Log\n> \n> What is the accepted way of splitting the data from pg_xlog?\n\nYou really can't split it so to speak. It all needs to be in one place. \nOr do you mean splitting the load? Maybe putting it onto a RAID0 \npartition, but that's chancy.\n\n> I've been testing some configurations for low budget performance, and I \n> haven't been able to make this help vs. one disk. (under osx, ymmv)\n\nI haven't found anything that helps much either, except for fast drives.\n\nYou can, however, turn on the noatime mounting option under Linux (BSD has \nsomething similar) and it should help speed things up on any file system. \nYou can also try turning on the async option, but I'm not sure this is a \nproblem or not for data integrity on a transaction log file system. \nComments?\n\n> I rsync'd the pg_xlog directory to another disk, then set up a symlink \n> pointing from the data/pg_xlog to /other/disk/pg_xlog. \n>\n> I then got tps numbers that were 2/3 of the single ide drive speed. The \n> only explanation I can come up with is that something is seeking to the \n> symlink, then doing the actual write on the other drive. \n\nrsync isn't still running is it? you can just use the cp command while \nthe database is shut down to move the pg_xlog dir. like so:\n\npg_ctl stop\nmkdir /mnt/bigdog/pg_xlog\nchown postgres.postgres /mnt/bigdog/pg_xlog\nchmod 700 /mnt/bigdog/pg_xlog\ncd $PGDATA\ncp -Rfp pg_xlog/* /mnt/bigdog/pg_xlog/\nmv pg_xlog pg_xlog.old (I always keep stuff till I'm sure I really don't \nneed it.)\nln -s /mnt/bigdog/pg_xlog pg_xlog\npg_ctl start\n\n\n\nDon't forget, noatime in the mount options, makes a big difference.\n\n", "msg_date": "Mon, 9 Dec 2002 15:22:11 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "\n\n> > I've been testing some configurations for low budget performance, and I \n> > haven't been able to make this help vs. one disk. (under osx, ymmv)\n> \n> I haven't found anything that helps much either, except for fast drives.\n\n\n> \n> You can, however, turn on the noatime mounting option under Linux (BSD has \n> something similar) and it should help speed things up on any file system.\n\nI don't think that's an option for osx/hfs mounts, at least mount doesn't list it. (not that mount really works on 10.1.x, but whatever)\n \n> You can also try turning on the async option, but I'm not sure this is a \n> problem or not for data integrity on a transaction log file system. 
\n> Comments?\n\nfrom man mount:\n\n async All I/O to the file system should be done asynchronously.\n This is a dangerous flag to set, and should not be used\n unless you are prepared to recreate the file system\n should your system crash.\n\n\nI'd guess that this is about the same as fsync = off, except that it's your os lying to you instead of your database.\n\n> \n> > I rsync'd the pg_xlog directory to another disk, then set up a symlink \n> > pointing from the data/pg_xlog to /other/disk/pg_xlog. \n> >\n> > I then got tps numbers that were 2/3 of the single ide drive speed. The \n> > only explanation I can come up with is that something is seeking to the \n> > symlink, then doing the actual write on the other drive. \n> \n> rsync isn't still running is it? you can just use the cp command while \n> the database is shut down to move the pg_xlog dir. like so:\n\nrsync == copy, it's just that I remember the command line switches for it. \n\n> pg_ctl stop\n> mkdir /mnt/bigdog/pg_xlog\n> chown postgres.postgres /mnt/bigdog/pg_xlog\n> chmod 700 /mnt/bigdog/pg_xlog\n> cd $PGDATA\n> cp -Rfp pg_xlog/* /mnt/bigdog/pg_xlog/\n> mv pg_xlog pg_xlog.old (I always keep stuff till I'm sure I really don't \n> need it.)\n> ln -s /mnt/bigdog/pg_xlog pg_xlog\n> pg_ctl start\n> \n\nThis is about what I did, except that /mnt/bigdog/pg_xlog == /Volumes/scsi1. \n\nWhere you can do something different is mount bigdog at data/pg_xlog, instead of using the symlinks. Given the interesting state of filesystem tools under osx, I can't really do that. (at least under 10.1.5, looks like the laptop running 10.2 has a little more info. not that the laptop has room for a 3.5\" 10k rpm scsi drive & pci scsi card for testing...)\n\neric\n\n\n\n", "msg_date": "Mon, 09 Dec 2002 14:44:31 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Mon, 9 Dec 2002, Josh Berkus wrote:\n\n> \n> Depesz,\n> \n> > i have a question about best harddisk configuration for postgresql\n> > performance.\n> > of course i know that:\n> > - scsi is better than ide\n> > - 2 disks are better than 1\n> > - 3 disks are better than 2\n> > \n> > i know that with 3 disks one should move xlog to one drive, index files\n> > to second and tables to third.\n> > that's clear.\n> \n> Er, no, it's not. In fact, for a 3-disk config, I reccommend:\n> \n> Disk 1: OS, swap, system logs\n> Disk 2: Data + Indexes\n> Disk 3: Transaction Log\n\nActually, first I'd try one big RAID 5 and see how it runs. THEN I'd \nspend time mucking around with different configs if that wasn't fast \nenough. If you need x performance and get 10x with a RAID 5 then move on \nto more interesting problems.\n\n> > but:\n> > will making software raid on this discs provide performance increase or\n> > decrease?\n> \n> Hardware RAID can improve *read* performance, particilarly RAIDs 1, 01, and \n> 10. For writing, the best you can do is having it not inhibit performance. \n> The general testament is that *software* RAID does not improve things at all; \n> actually, the best that can be said for Linux Software RAID 1 is that it does \n> not harm performance much.\n\nNot in my experience. I'd estimate my test box with dual 18 Gig UW scsis \nruns about 1.5 to 1.8 times faster with the two drives in a RAID1 as if \na single one is used. Bonnie confirms this. 
single drive can read about \n25 Megs a second, a pair in a RAID1 reads at about 48 Megs a second.\n\nBut as you pointed out in your reply, it's more important to look at how \nhe's gonna drive the database. If it has to input hundreds of short \nqueries a second, that's a whole different problem than a data warehouse \nwith 500 people throwing 8 way joins at the data all day.\n\n\n\n", "msg_date": "Mon, 9 Dec 2002 16:00:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "\nScott,\n\n> Actually, first I'd try one big RAID 5 and see how it runs. THEN I'd \n> spend time mucking around with different configs if that wasn't fast \n> enough. If you need x performance and get 10x with a RAID 5 then move on \n> to more interesting problems.\n\nDepends on how much time you have to spend re-installing. IMHO, RAID 5 is \nslower that straight disks for Postgres, especially with large numbers of \nwrites. This may not be true for $1000 RAID controllers, but I have yet to \nuse one.\n\nI have a box with a low-end RAID 5 controller, and it drives like a single IDE \ndrive on large UPDATE queries. Slower, somethimes.\n\n> Not in my experience. I'd estimate my test box with dual 18 Gig UW scsis \n> runs about 1.5 to 1.8 times faster with the two drives in a RAID1 as if \n> a single one is used. Bonnie confirms this. single drive can read about \n> 25 Megs a second, a pair in a RAID1 reads at about 48 Megs a second.\n\nThis is Linux software RAID?\n\n> But as you pointed out in your reply, it's more important to look at how \n> he's gonna drive the database. If it has to input hundreds of short \n> queries a second, that's a whole different problem than a data warehouse \n> with 500 people throwing 8 way joins at the data all day.\n\nDefinitely.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Mon, 9 Dec 2002 15:26:36 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Mon, 9 Dec 2002, Josh Berkus wrote:\n\n> \n> Scott,\n> \n> > Actually, first I'd try one big RAID 5 and see how it runs. THEN I'd \n> > spend time mucking around with different configs if that wasn't fast \n> > enough. If you need x performance and get 10x with a RAID 5 then move on \n> > to more interesting problems.\n> \n> Depends on how much time you have to spend re-installing. IMHO, RAID 5 is \n> slower that straight disks for Postgres, especially with large numbers of \n> writes. This may not be true for $1000 RAID controllers, but I have yet to \n> use one.\n\nEven the fastest RAID 5 boxes aren't superfast, but a RAID5 of 15k drives \nwith a lot of drive in it does OK, since it can 1: spread small writes \naround on many different drives (i.e. if you have 12 drives, and a lot of \nsmall writes, a lot of them will be on different drives.) as well as \nspreading out random reads, while providing good large reads, i.e. \nsequential scans.\n\nThe key to good RAID 5 is to throw as many drives as you possibly can at a\nproblem, preferably across several SCSI interfaces. 
Or FC-AL.\n\n> I have a box with a low-end RAID 5 controller, and it drives like a single IDE \n> drive on large UPDATE queries. Slower, somethimes.\n\nMany low end RAID 5 controllers are pretty slow. The adaptec AIC133 \nseries (I think that's the right number) are total dogs. The older AMI \nMega raids were fast for their day, but any decent 350 MHz machine with a \ndual channed SymBIOS card will outrun it at RAID 5.\n\n> > Not in my experience. I'd estimate my test box with dual 18 Gig UW scsis \n> > runs about 1.5 to 1.8 times faster with the two drives in a RAID1 as if \n> > a single one is used. Bonnie confirms this. single drive can read about \n> > 25 Megs a second, a pair in a RAID1 reads at about 48 Megs a second.\n> \n> This is Linux software RAID?\n\nYep. The kernel level drivers are quite fast in my experience, but they \ndon't seem to give any improvement when layered (i.e. 1+0 or 0+1) over \nwhatever is the slowest of the two layers. I.e. setting up a RAID5 of \nRAID0s results in almost the exact same performance as if you'd just setup \nthe same number of drives under RAID 5 as you had mirror sets in RAID0. \nSince this is the case, you get better performance just going to RAID 5 \nwith twice the disks and twice (-1n) the space.\n\n\n", "msg_date": "Mon, 9 Dec 2002 17:06:20 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> I'm going to try this under linux using mount points, but I need to shuffle \n> hardware first. \n\n> This is the only way I've done it. I'm not sure what the Mac problem is.\n\nIt sounds like OS X fails to optimize repeated lookups of the same\nsymlink. I haven't tried to do any performance measurement of this\nmyself, but if true a gripe to Apple would be in order. Most of the\ndesigns I've seen for clean tablespace handling will depend on symlinks\nmuch more than we do today, so a performance penalty for symlinks will\n*really* hurt further down the road.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Dec 2002 19:08:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations " }, { "msg_contents": "i'm replying to my own letter to gather all replies in one mail.\n\ni got some replies. some of them are useful. some aren't really. i'm not\nreally sure how the usage can modify \"what is best for some part of\ndatabase files\".\ncan you explain me how comes that for some uses it's best\n(performance-wise) to keep xlog's on straight disc, and tables on raid5\nwith lots' of disks, and for some other uses it's better to keep xlog on\nraid0 and tables on raid 10?\n\nanyway: what i understood is that usually the best (performance-wise),\nwould be to put:\nxlog - separate - unraid'ed disk, or raid0\ntables - any raid, but not raid 1\nindices - any raid, but not raid 1\n\nthanks for all replies.\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\nMój Boże, spraw abym milczał, dopóki się nie upewnię, że naprawdę mam\ncoś do powiedzenia. 
(c) 1998 depesz", "msg_date": "Tue, 10 Dec 2002 22:16:03 +0100", "msg_from": "Hubert depesz Lubaczewski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "On Tue, 2002-12-10 at 15:16, Hubert depesz Lubaczewski wrote:\n[snip]\n> anyway: what i understood is that usually the best (performance-wise),\n> would be to put:\n> xlog - separate - unraid'ed disk, or raid0\n\nUnless your data is *easily* recreatable, NEVER RAID0!!\n\n> tables - any raid, but not raid 1\n> indices - any raid, but not raid 1\n\nWhy not RAID1 (mirroring)? It speeds up both reads and writes.\n\nThings are also dependent on the RAID controller, since they\nall have different strengths and weaknesses.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "10 Dec 2002 17:13:35 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: questions about disk configurations" }, { "msg_contents": "I have two tables A and B where A is a huge table with thousands of rows, B \nis a small table with only a couple of entries.\n\nI want to do something like\n\nSELECT\n\tA.ID\n\tA.Name\nFROM\n\tA JOIN B ON (A.ID = B.ID)\n\nAnd on the other hand I can have something like this\n\nSELECT\n\tA.ID\n\tA.Name\nFROM\n\tA\nWHERE\n\tA.ID IN (B_Id_List)\n\nB_Id_List is a string concatenation of B.ID. (ie, 1,2,3,4,5 ...)\n\nWhich one is faster, more efficient?\n\nAnd if you could, which one is faster/more efficient under MS SQL Server 7? \nI am trying to develop a cross platform query, that is why I need to \nconcern with performance under different databases.\n\nThanks a lot!\n\nWei\n\n", "msg_date": "Wed, 11 Dec 2002 00:20:57 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Which of the solution is better?" }, { "msg_contents": "On Tue, 2002-12-10 at 23:20, Wei Weng wrote:\n> I have two tables A and B where A is a huge table with thousands of rows, B \n> is a small table with only a couple of entries.\n> \n> I want to do something like\n> \n> SELECT\n> \tA.ID\n> \tA.Name\n> FROM\n> \tA JOIN B ON (A.ID = B.ID)\n\nHow is this query any different from:\nSELECT\n\tA.ID,\n\tA.Name\nFROM\n\tA,\n\tB\nWHERE \n\tA.ID = B.ID\n\n> And on the other hand I can have something like this\n> \n> SELECT\n> \tA.ID\n> \tA.Name\n> FROM\n> \tA\n> WHERE\n> \tA.ID IN (B_Id_List)\n> \n> B_Id_List is a string concatenation of B.ID. (ie, 1,2,3,4,5 ...)\n> \n> Which one is faster, more efficient?\n> \n> And if you could, which one is faster/more efficient under MS SQL Server 7? \n> I am trying to develop a cross platform query, that is why I need to \n> concern with performance under different databases.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "10 Dec 2002 23:46:04 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which of the solution is better?" }, { "msg_contents": "On Wed, Dec 11, 2002 at 11:26:20AM -0500, Wei Weng wrote:\n> I don't think there is any. It is just another way to write an outer\n> join.\n\nThat's not exactly true. Doing A JOIN B ON (A.ID=B.ID) constrains\nthe planner. See the section on explicit join order in the\nPostgreSQL manual.\n\nThe IN locution, by the way, is almost always bad in Postgres. Avoid\nit.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 11 Dec 2002 10:48:28 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which of the solution is better?" }, { "msg_contents": "I don't think there is any. It is just another way to write an outer\njoin.\n\nOn Wed, 2002-12-11 at 00:46, Ron Johnson wrote:\n> On Tue, 2002-12-10 at 23:20, Wei Weng wrote:\n> > I have two tables A and B where A is a huge table with thousands of rows, B \n> > is a small table with only a couple of entries.\n> > \n> > I want to do something like\n> > \n> > SELECT\n> > \tA.ID\n> > \tA.Name\n> > FROM\n> > \tA JOIN B ON (A.ID = B.ID)\n> \n> How is this query any different from:\n> SELECT\n> \tA.ID,\n> \tA.Name\n> FROM\n> \tA,\n> \tB\n> WHERE \n> \tA.ID = B.ID\n> \n> > And on the other hand I can have something like this\n> > \n> > SELECT\n> > \tA.ID\n> > \tA.Name\n> > FROM\n> > \tA\n> > WHERE\n> > \tA.ID IN (B_Id_List)\n> > \n> > B_Id_List is a string concatenation of B.ID. (ie, 1,2,3,4,5 ...)\n> > \n> > Which one is faster, more efficient?\n> > \n> > And if you could, which one is faster/more efficient under MS SQL Server 7? \n> > I am trying to develop a cross platform query, that is why I need to \n> > concern with performance under different databases.\n-- \nWei Weng\nNetwork Software Engineer\nKenCast Inc.\n\n\n", "msg_date": "11 Dec 2002 11:26:20 -0500", "msg_from": "Wei Weng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which of the solution is better?" } ]
[ { "msg_contents": "Hi:\n\n Is the performance overhead of creating a\nmulti-column index greater than creating an individual\nindex for each column? (i.e. Is the INSERT slower for\na table with a three column index than a similar table\nwith three single column indices?).\n\n I was wondering this because according to\nPostgreSQL Manual in the section on multi-columned\nindexes \"Multicolumn indexes should be used sparingly.\nMost of the time, an index on a single column is\nsufficient and saves space and time\".\n\n\nThank you very much,\n\nludwig\n\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Tue, 10 Dec 2002 08:55:31 -0800 (PST)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of multi-column index on INSERT" }, { "msg_contents": "Ludwig Lim kirjutas T, 10.12.2002 kell 21:55:\n> Hi:\n> \n> Is the performance overhead of creating a\n> multi-column index greater than creating an individual\n> index for each column? (i.e. Is the INSERT slower for\n> a table with a three column index than a similar table\n> with three single column indices?).\n> \n> I was wondering this because according to\n> PostgreSQL Manual in the section on multi-columned\n> indexes \"Multicolumn indexes should be used sparingly.\n> Most of the time, an index on a single column is\n> sufficient and saves space and time\".\n\nYou should create only as much indexes as you really need.\n\nA multi-column index can not usually replace multiple single column\nindexes and vice versa. \n\nFor example, while an index on a,b,c can be used for search on both a\nand b it will not be used for search on b and c and will be used like\nindex on a for search on a and c.\n\nWhile a multi-column index is slower than a single-column one, it is\ndefinitely faster than multiple single column indexes - one 3-column\nindex should always be faster than 3 single-column indexes.\n\nAlso, currently even with multiple single-column indexes on a,b and c\nthe search on a and c will use only one index, either on a or c.\n\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "11 Dec 2002 01:50:22 +0500", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of multi-column index on INSERT" } ]
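A short sketch of the prefix rule described in that answer; the table, column and index names are illustrative only.

CREATE TABLE t (a integer, b integer, c integer);
CREATE INDEX t_a_b_c_idx ON t (a, b, c);

-- can use t_a_b_c_idx: the leading column a is constrained
SELECT * FROM t WHERE a = 1 AND b = 2;

-- uses the index for the a = 1 part only; c = 3 is checked afterwards
SELECT * FROM t WHERE a = 1 AND c = 3;

-- cannot make effective use of the index: a is not constrained
SELECT * FROM t WHERE b = 2 AND c = 3;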
[ { "msg_contents": "I'm a bit confused.\n\nIn 7.3 is it possible to use GIST without using any of the stuff in \ncontrib/? If it is, how can it be done and in which cases should it be done?\n\nThe pgsql docs about indexes keep talking about GIST here and there, but I \ncan't seem to use them on anything. And there's no gist in the \"ops\" and \naccess method listing.\n\nHaving the docs say Postgresql provides GIST as one of the four access \nmethods, GIST supports multicolumn indexes, GIST etc, is just confusing if \nthe docs pertaining to indexes don't also say that in a default postgresql \ninstallation you cannot create an index using GIST (if you can actually \ncreate a GIST index \"out of box\", how??).\n\nAnother thing: is Eugene Selkov's 1998 message on GIST indexes in the 7.3 \ndocs (see GIST Indexes) still valid? There's mention of Postgresql 6.3 and \npostgres95 there too.\n\nBTW, 7.3 is GREAT! Multiple col/row returns, prepare queries, schemas etc. \nAlso set enable_seq_scan=off can get rolled back to whatever it was before \nnow right? Cool, coz I have to force index use for a particular select.\n\nThanks to the postgresql dev team and everyone involved!\n\nCheerio,\nLink.\n\n", "msg_date": "Thu, 12 Dec 2002 02:07:09 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": true, "msg_subject": "Docs: GIST" }, { "msg_contents": "Lincoln Yeoh <[email protected]> writes:\n> I'm a bit confused.\n> In 7.3 is it possible to use GIST without using any of the stuff in \n> contrib/?\n\nNo, because there are no GIST opclasses in the standard installation.\nThey are only in contrib.\n\nYes, that's a bit silly. As GIST improves out of the \"academic toy\"\ncategory into the \"production tool\" category, I expect we will migrate\nGIST opclasses into the standard installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Dec 2002 00:41:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Docs: GIST " }, { "msg_contents": "I did figure it out eventually but it'll be clearer to mention that in the \ndocs - e.g. the only way to use GIST is to use the stuff in contrib. Coz I \nhad a bit of wishful thinking - thought that maybe some bits of GIST might \nhave at least become useable by default in 7.3 e.g. the simpler stuff (the \ndocs didn't quite contradict that).\n\nDefinitely not asking for it to be rushed in tho. 
Software is more reliable \nwhen the developers know what they are doing, and they get to release stuff \nwhen they think it's ready, not when others say it is.\n\nCheerio,\nLink.\n\nAt 12:41 AM 12/12/02 -0500, Tom Lane wrote:\n\n>Lincoln Yeoh <[email protected]> writes:\n> > I'm a bit confused.\n> > In 7.3 is it possible to use GIST without using any of the stuff in\n> > contrib/?\n>\n>No, because there are no GIST opclasses in the standard installation.\n>They are only in contrib.\n\n\n", "msg_date": "Fri, 13 Dec 2002 05:02:28 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Docs: GIST " }, { "msg_contents": "Hi all,\n\nI just read about the cluster command and was a little (very)\ndisapointed.\nClustered tables do not remain clustered after inserts.\nClustered tables are usefull when the table is very large and there are\nfew different keys.\n\n\nBecause the table file is already extended (2G limit) using different\nfiles extension (.N)\nhow complicated (modifying the code) would it be to have the table files\nsplit according to the cluster key?\n\nThis would:\n\nGreatly improve performance when the cluster key in included in search\ncriteria.\nAllow for a much larger table before a file has to be split (.N).\nSimplify the management of symblinks (that's something else we need to\nlook at).\nThe index file for that field would no longer be required.\n\nOf course, there should be only one cluster key per table.\nThe length the \"key\" should be short and the number of unique key should\nbe low as well.\n\nSO... ?\n\nJLL\n", "msg_date": "Thu, 12 Dec 2002 16:31:46 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "CLUSTER command" }, { "msg_contents": "Oh, and something else, \n\nI think the syntax should be:\n\nCluster <table> on <attribute>\n\n\nMaybe inheritance can be use here. \nThe problem is creating the new \"table\" when a new key is detected.\nI know, I can use rules, but the optimiser is not aware of the\nclustering.\n\nEnough from me for now.\n\nWhat do you think?\n\nJLL\n\n\nJean-Luc Lachance wrote:\n> \n> Hi all,\n> \n> I just read about the cluster command and was a little (very)\n> disapointed.\n> Clustered tables do not remain clustered after inserts.\n> Clustered tables are usefull when the table is very large and there are\n> few different keys.\n> \n> Because the table file is already extended (2G limit) using different\n> files extension (.N)\n> how complicated (modifying the code) would it be to have the table files\n> split according to the cluster key?\n> \n> This would:\n> \n> Greatly improve performance when the cluster key in included in search\n> criteria.\n> Allow for a much larger table before a file has to be split (.N).\n> Simplify the management of symblinks (that's something else we need to\n> look at).\n> The index file for that field would no longer be required.\n> \n> Of course, there should be only one cluster key per table.\n> The length the \"key\" should be short and the number of unique key should\n> be low as well.\n> \n> SO... 
?\n> \n> JLL\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n", "msg_date": "Thu, 12 Dec 2002 16:40:24 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "\nOn Thu, 12 Dec 2002, Jean-Luc Lachance wrote:\n\n> Hi all,\n>\n> I just read about the cluster command and was a little (very)\n> disapointed.\n> Clustered tables do not remain clustered after inserts.\n> Clustered tables are usefull when the table is very large and there are\n> few different keys.\n>\n>\n> Because the table file is already extended (2G limit) using different\n> files extension (.N)\n> how complicated (modifying the code) would it be to have the table files\n> split according to the cluster key?\n\nI'd vote against changing the existing CLUSTER since the existing CLUSTER\nwhile not great does handle many different key values fairly well as well\nand this solution wouldn't. Many different key values are still\nuseful to cluster if you're doing searches over ranges since it lowers the\nnumber of heap file reads necessary. If done this should probably be\nseparate from the existing cluster or at least both versions should be\npossible.\n\n\n", "msg_date": "Thu, 12 Dec 2002 14:03:56 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER command" }, { "msg_contents": "The current cluster command is equivalant to:\n\ncreate b as select * from a order by i;\n\nSo you would not be loosing anything.\n\n\n\nStephan Szabo wrote:\n> \n> On Thu, 12 Dec 2002, Jean-Luc Lachance wrote:\n> \n> > Hi all,\n> >\n> > I just read about the cluster command and was a little (very)\n> > disapointed.\n> > Clustered tables do not remain clustered after inserts.\n> > Clustered tables are usefull when the table is very large and there are\n> > few different keys.\n> >\n> >\n> > Because the table file is already extended (2G limit) using different\n> > files extension (.N)\n> > how complicated (modifying the code) would it be to have the table files\n> > split according to the cluster key?\n> \n> I'd vote against changing the existing CLUSTER since the existing CLUSTER\n> while not great does handle many different key values fairly well as well\n> and this solution wouldn't. Many different key values are still\n> useful to cluster if you're doing searches over ranges since it lowers the\n> number of heap file reads necessary. If done this should probably be\n> separate from the existing cluster or at least both versions should be\n> possible.\n", "msg_date": "Thu, 12 Dec 2002 17:15:37 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER command" }, { "msg_contents": "On Thu, Dec 12, 2002 at 02:03:56PM -0800, Stephan Szabo wrote:\n> I'd vote against changing the existing CLUSTER since the existing\n> CLUSTER while not great does handle many different key values fairly\n> well as well and this solution wouldn't.\n\nI would agree. What's being proposed sounds much more like table\npartitioning than clustering.\n\nThat's not to say that the existing CLUSTER couldn't be improved, at\nthe very least to the point where it allows inserts to respect the\nclustered structure. 
That's a post for another thread, though.\n\n-johnnnnnnnnnnn\n", "msg_date": "Thu, 12 Dec 2002 16:26:41 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CLUSTER command" }, { "msg_contents": "On Thu, 12 Dec 2002, Jean-Luc Lachance wrote:\n\n> The current cluster command is equivalant to:\n>\n> create b as select * from a order by i;\n>\n> So you would not be loosing anything.\n\nExcept for the fact that the CLUSTER is intended (although\nI don't know if it does yet) to retain things like constraints\nand other indexes which the above doesn't.\n\n", "msg_date": "Thu, 12 Dec 2002 14:27:02 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CLUSTER command" }, { "msg_contents": "OK fine,\n\nLet's create a new command:\n\nPARTITION <table> ON <attribute>\n\nI did not want to start a fight. You can keep the CLUSTER command as it\nis.\n\nI still think clustering/partitioning would be a great idea.\nThis is what I want to talk about. Look at the original post for the\nreasons.\n\n\nJLL\n\n\n\njohnnnnnn wrote:\n> \n> On Thu, Dec 12, 2002 at 02:03:56PM -0800, Stephan Szabo wrote:\n> > I'd vote against changing the existing CLUSTER since the existing\n> > CLUSTER while not great does handle many different key values fairly\n> > well as well and this solution wouldn't.\n> \n> I would agree. What's being proposed sounds much more like table\n> partitioning than clustering.\n> \n> That's not to say that the existing CLUSTER couldn't be improved, at\n> the very least to the point where it allows inserts to respect the\n> clustered structure. That's a post for another thread, though.\n> \n> -johnnnnnnnnnnn\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Thu, 12 Dec 2002 17:39:44 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "Splitting table files by indexed value may not help if the operating system \ndoesn't manage to keep the tables unfragmented on disk. I suppose the O/S \nshould know how to do it though.\n\nCheerio,\nLink.\n\nAt 04:31 PM 12/12/02 -0500, Jean-Luc Lachance wrote:\n\n>Hi all,\n>\n>I just read about the cluster command and was a little (very)\n>disapointed.\n>Clustered tables do not remain clustered after inserts.\n>Clustered tables are usefull when the table is very large and there are\n>few different keys.\n>\n>\n>Because the table file is already extended (2G limit) using different\n>files extension (.N)\n>how complicated (modifying the code) would it be to have the table files\n>split according to the cluster key?\n\n\n", "msg_date": "Fri, 13 Dec 2002 06:49:12 +0800", "msg_from": "Lincoln Yeoh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CLUSTER command" }, { "msg_contents": "Hi -\nI've been running PostgreSQL 7.3 on Mac OS X 10.2 since it was released \nand it's been running fine. I'm using pyPgSQL 2.3 for client side \nprogramming which also was working great until tonight. 
Now whenever \nI do any query of any type, I get warnings like this:\n\nWARNING: PerformPortalClose: portal \"pgsql_00179f10\" not found\n\nIt \"appears\" that everything is still working the way it was but it's a \nbit discomforting to have these show up on my screen while running my \napplications.\n\nAnyone that can explain this?\n\nHere's a tiny bit of Python sample code that I used to make sure it \nwasn't my other code causing the problems\n\nfrom pyPgSQL import PgSQL\n\ndbname = \"template1\"\nconn = PgSQL.connect(database=dbname)\ncursor = conn.cursor()\nsql = \"SELECT now()\";\ncursor.execute(sql)\nres = cursor.fetchall()\nfor i in res:\n\tprint i\ncursor.close()\nconn.commit()\n\nstrangely if I remove the last 2 lines (cursor.close() and \nconn.commit()) I don't get the errors.\n\nAlso I don't notice that I don't have this problem with psql command \nline either. Is this the Python API causing this?\n\nThanks for any help\n\nMike\n\n", "msg_date": "Thu, 12 Dec 2002 17:50:55 -0500", "msg_from": "Michael Engelhart <[email protected]>", "msg_from_op": false, "msg_subject": "PerformPortalClose warning in 7.3" }, { "msg_contents": "On Thu, Dec 12, 2002 at 05:39:44PM -0500, Jean-Luc Lachance wrote:\n> Let's create a new command:\n> \n> PARTITION <table> ON <attribute>\n<snip>\n> Because the table file is already extended (2G limit) using\n> different files extension (.N)\n> how complicated (modifying the code) would it be to have the table\n> files split according to the cluster key?\n\nI think the code changes would be complicated. Just at a 30-second\nconsideration, this would need to touch:\n- all sql (selects, inserts, updates, deletes)\n- vacuuming\n- indexing\n- statistics gathering\n- existing clustering\n\nThat's not to say it's not worthwhile to look into, but it's big.\n\nAll of that aside, a view over unions is possible now:\n\ncreate table u1 (...);\ncreate table u2 (...);\ncreate table u3 (...);\n\ncreate view uv as (select \"A\" as partition_key, ... from u1\n union all\n select \"B\" as partition_key, ... from u2\n union all\n select \"C\" as partition_key, ... from u3);\n\nThat keeps the tables in different files on-disk while still allowing\nyou to query against all of them. You need to index them separately\nand logic is necessary when changing data.\n\nHope that helps.\n\n-johnnnnnnnnnn\n", "msg_date": "Thu, 12 Dec 2002 17:00:02 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CLUSTER command" }, { "msg_contents": "On Thu, 12 Dec 2002, johnnnnnn wrote:\n\n> On Thu, Dec 12, 2002 at 05:39:44PM -0500, Jean-Luc Lachance wrote:\n> > Let's create a new command:\n> >\n> > PARTITION <table> ON <attribute>\n> <snip>\n> > Because the table file is already extended (2G limit) using\n> > different files extension (.N)\n> > how complicated (modifying the code) would it be to have the table\n> > files split according to the cluster key?\n>\n\n> I think the code changes would be complicated. Just at a 30-second\n> consideration, this would need to touch:\n> - all sql (selects, inserts, updates, deletes)\n> - vacuuming\n> - indexing\n> - statistics gathering\n> - existing clustering\n\nI think his idea was to treat it similarly to the way that the\nsystem treats tables >2G with .N files. The only thing is that\nI believe the code that deals with that wouldn't be particularly\neasy to change to do it though, but I've only taken a cursory look at\nwhat I think is the place that does that(storage/smgr/md.c). 
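To make the UNION ALL sketch from earlier in this thread runnable, here is a
minimal version with made-up table and column names; note that the partition
keys need to be ordinary string literals in single quotes (the double-quoted
"A"/"B" shorthand would normally be read as column names). Whether the
planner really avoids reading the branches that cannot match is exactly the
doubt raised in the follow-ups, and is best checked with EXPLAIN on the
release at hand:

   CREATE TABLE u1 (foo text, amount integer);
   CREATE TABLE u2 (foo text, amount integer);
   CREATE INDEX u1_foo_idx ON u1 (foo);
   CREATE INDEX u2_foo_idx ON u2 (foo);

   CREATE VIEW uv AS
       SELECT 'A'::text AS partition_key, foo, amount FROM u1
       UNION ALL
       SELECT 'B'::text AS partition_key, foo, amount FROM u2;

   -- The question the thread turns on: does this plan touch u2 at all?
   EXPLAIN SELECT * FROM uv WHERE partition_key = 'A' AND foo = 'bar';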
Some sort of\ngood partitioning system would be nice though.\n\n\n> create table u1 (...);\n> create table u2 (...);\n> create table u3 (...);\n>\n> create view uv as (select \"A\" as partition_key, ... from u1\n> union all\n> select \"B\" as partition_key, ... from u2\n> union all\n> select \"C\" as partition_key, ... from u3);\n>\n> That keeps the tables in different files on-disk while still allowing\n> you to query against all of them. You need to index them separately\n> and logic is necessary when changing data.\n\nUnfortunately, I think that the optimizer isn't going to do what you'd\nhope here and scan only the appropriate table if you were to say\npartition_key='A' and foo='bar'. I'd love to be shown that I'm wrong, but\nthe best I could see hoping for would be that if partition_key was part of\nu1-u3 and there was an index on partition_key,foo that it could use that\nand do minimal work on the other tables.\n\nIn addition, doing something like the above is a nightmare if you don't\nknow beforehand what the partitions should be (for example if you know\nthere aren't alot of distinct values, but you don't know what they are) or\nfor that matter even with 10-15 partitions, writing the rules and such\nwould probably be really error prone.\n\n", "msg_date": "Thu, 12 Dec 2002 16:03:47 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "On Thu, Dec 12, 2002 at 04:03:47PM -0800, Stephan Szabo wrote:\n> On Thu, 12 Dec 2002, johnnnnnn wrote:\n> \n> > I think the code changes would be complicated. Just at a 30-second\n> > consideration, this would need to touch:\n> > - all sql (selects, inserts, updates, deletes)\n> > - vacuuming\n> > - indexing\n> > - statistics gathering\n> > - existing clustering\n> \n> I think his idea was to treat it similarly to the way that the\n> system treats tables >2G with .N files. The only thing is that\n> I believe the code that deals with that wouldn't be particularly\n> easy to change to do it though, but I've only taken a cursory look at\n> what I think is the place that does that(storage/smgr/md.c). Some sort of\n> good partitioning system would be nice though.\n\nI don't think this is doable without a huge amount of work. The storage\nmanager doesn't know anything about what is in a page, let alone a\ntuple. And it shouldn't, IMHO. Upper levels don't know how are pages\norganized in disk; they don't know about .1 segments and so on, and they\nshouldn't.\n\nI think this kind of partition doesn't buy too much. I would really\nlike to have some kind of auto-clustering, but it should be implemented\nin some upper level; e.g., by leaving some empty space in pages for\nfuture tuples, and arranging the whole heap again when it runs out of\nfree space somewhere. Note that this is very far from the storage\nmanager.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"La realidad se compone de muchos sue�os, todos ellos diferentes,\npero en cierto aspecto, parecidos...\" (Yo, hablando de sue�os er�ticos)\n", "msg_date": "Thu, 12 Dec 2002 21:47:19 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "I think Oracle does something like this with its clustering. You set a \n%fill and Oracle uses this when doing inserts into a segment and when to \nadd a new one. There is also some control over the grouping of data \nwithin a page. 
I don't have an Oracle manual present, but I think the \nclustering works on a specific index. \n\nI agree that adding auto-clustering would be a very good thing and that \nwe can learn about functionality by studying what other applications \nhave already done and if/how those strategies were successful.\n\nCharlie\n\n\nAlvaro Herrera wrote:\n\n>On Thu, Dec 12, 2002 at 04:03:47PM -0800, Stephan Szabo wrote:\n> \n>\n>>On Thu, 12 Dec 2002, johnnnnnn wrote:\n>>\n>> \n>>\n>>>I think the code changes would be complicated. Just at a 30-second\n>>>consideration, this would need to touch:\n>>>- all sql (selects, inserts, updates, deletes)\n>>>- vacuuming\n>>>- indexing\n>>>- statistics gathering\n>>>- existing clustering\n>>> \n>>>\n>>I think his idea was to treat it similarly to the way that the\n>>system treats tables >2G with .N files. The only thing is that\n>>I believe the code that deals with that wouldn't be particularly\n>>easy to change to do it though, but I've only taken a cursory look at\n>>what I think is the place that does that(storage/smgr/md.c). Some sort of\n>>good partitioning system would be nice though.\n>> \n>>\n>\n>I don't think this is doable without a huge amount of work. The storage\n>manager doesn't know anything about what is in a page, let alone a\n>tuple. And it shouldn't, IMHO. Upper levels don't know how are pages\n>organized in disk; they don't know about .1 segments and so on, and they\n>shouldn't.\n>\n>I think this kind of partition doesn't buy too much. I would really\n>like to have some kind of auto-clustering, but it should be implemented\n>in some upper level; e.g., by leaving some empty space in pages for\n>future tuples, and arranging the whole heap again when it runs out of\n>free space somewhere. Note that this is very far from the storage\n>manager.\n>\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Thu, 12 Dec 2002 20:06:35 -0500", "msg_from": "\"Charles H. Woloszynski\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "\nOn Thu, 12 Dec 2002, Alvaro Herrera wrote:\n\n> On Thu, Dec 12, 2002 at 04:03:47PM -0800, Stephan Szabo wrote:\n> > On Thu, 12 Dec 2002, johnnnnnn wrote:\n> >\n> > > I think the code changes would be complicated. Just at a 30-second\n> > > consideration, this would need to touch:\n> > > - all sql (selects, inserts, updates, deletes)\n> > > - vacuuming\n> > > - indexing\n> > > - statistics gathering\n> > > - existing clustering\n> >\n> > I think his idea was to treat it similarly to the way that the\n> > system treats tables >2G with .N files. The only thing is that\n> > I believe the code that deals with that wouldn't be particularly\n> > easy to change to do it though, but I've only taken a cursory look at\n> > what I think is the place that does that(storage/smgr/md.c). Some sort of\n> > good partitioning system would be nice though.\n>\n> I don't think this is doable without a huge amount of work. The storage\n> manager doesn't know anything about what is in a page, let alone a\n> tuple. And it shouldn't, IMHO. Upper levels don't know how are pages\n> organized in disk; they don't know about .1 segments and so on, and they\n> shouldn't.\n\nWhich is part of why I said it wouldn't be easy to change to do that,\nthere's no good way to communicate that information. 
Like I said, I\ndidn't look deeply, but I had to look though, because you can never tell\nwith bits of old university code to do mostly what you want that haven't\nbeen exercised in years floating around.\n\n> I think this kind of partition doesn't buy too much. I would really\n> like to have some kind of auto-clustering, but it should be implemented\n> in some upper level; e.g., by leaving some empty space in pages for\n> future tuples, and arranging the whole heap again when it runs out of\n> free space somewhere. Note that this is very far from the storage\n> manager.\n\nAuto clustering would be nice.\n\nI think Jean-Luc's suggested partitioning mechanism has certain usage\npatterns that it's a win for and most others that it's not. Since the\nusage pattern I can think of (very large table with a small number of\nbreakdowns where your conditions are primarily on those breakdowns) aren't\neven remotely in the domain of things I've worked with, I can't say\nwhether it'd end up really being a win to avoid the index reads for the\ntable.\n\n", "msg_date": "Thu, 12 Dec 2002 18:11:50 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "Stephan,\n\nSomeone commented earlier about the separation/abstraction of the\nstorage manager.\nI agree that it should not be done at the storage level.\n\nMaybe a better idea, would be to create a new pg_partition table that\nwould have the functionality of an index on the key field and also be\nused to point to a file/table ID.\n\nThat would be alot more work to code on thet planner though.\n\nIf a newly inherited table could also inherite the constraints and\nindecies of its parent maybe things would be easier.\n\nJLL\n\n\nStephan Szabo wrote:\n> \n> On Thu, 12 Dec 2002, johnnnnnn wrote:\n> \n> > On Thu, Dec 12, 2002 at 05:39:44PM -0500, Jean-Luc Lachance wrote:\n> > > Let's create a new command:\n> > >\n> > > PARTITION <table> ON <attribute>\n> > <snip>\n> > > Because the table file is already extended (2G limit) using\n> > > different files extension (.N)\n> > > how complicated (modifying the code) would it be to have the table\n> > > files split according to the cluster key?\n> >\n> \n> > I think the code changes would be complicated. Just at a 30-second\n> > consideration, this would need to touch:\n> > - all sql (selects, inserts, updates, deletes)\n> > - vacuuming\n> > - indexing\n> > - statistics gathering\n> > - existing clustering\n> \n> I think his idea was to treat it similarly to the way that the\n> system treats tables >2G with .N files. The only thing is that\n> I believe the code that deals with that wouldn't be particularly\n> easy to change to do it though, but I've only taken a cursory look at\n> what I think is the place that does that(storage/smgr/md.c). Some sort of\n> good partitioning system would be nice though.\n> \n> > create table u1 (...);\n> > create table u2 (...);\n> > create table u3 (...);\n> >\n> > create view uv as (select \"A\" as partition_key, ... from u1\n> > union all\n> > select \"B\" as partition_key, ... from u2\n> > union all\n> > select \"C\" as partition_key, ... from u3);\n> >\n> > That keeps the tables in different files on-disk while still allowing\n> > you to query against all of them. 
You need to index them separately\n> > and logic is necessary when changing data.\n> \n> Unfortunately, I think that the optimizer isn't going to do what you'd\n> hope here and scan only the appropriate table if you were to say\n> partition_key='A' and foo='bar'. I'd love to be shown that I'm wrong, but\n> the best I could see hoping for would be that if partition_key was part of\n> u1-u3 and there was an index on partition_key,foo that it could use that\n> and do minimal work on the other tables.\n> \n> In addition, doing something like the above is a nightmare if you don't\n> know beforehand what the partitions should be (for example if you know\n> there aren't alot of distinct values, but you don't know what they are) or\n> for that matter even with 10-15 partitions, writing the rules and such\n> would probably be really error prone.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Fri, 13 Dec 2002 11:42:25 -0500", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] CLUSTER command" }, { "msg_contents": "\nI tried to reproduce the problem here but it seems my python is too old.\nI am CC'ing this to the interfaces list in case someone there knows or\ncan test it.\n\n---------------------------------------------------------------------------\n\nMichael Engelhart wrote:\n> Hi -\n> I've been running PostgreSQL 7.3 on Mac OS X 10.2 since it was released \n> and it's been running fine. I'm using pyPgSQL 2.3 for client side \n> programming which also was working great until tonight. Now whenever \n> I do any query of any type, I get warnings like this:\n> \n> WARNING: PerformPortalClose: portal \"pgsql_00179f10\" not found\n> \n> It \"appears\" that everything is still working the way it was but it's a \n> bit discomforting to have these show up on my screen while running my \n> applications.\n> \n> Anyone that can explain this?\n> \n> Here's a tiny bit of Python sample code that I used to make sure it \n> wasn't my other code causing the problems\n> \n> from pyPgSQL import PgSQL\n> \n> dbname = \"template1\"\n> conn = PgSQL.connect(database=dbname)\n> cursor = conn.cursor()\n> sql = \"SELECT now()\";\n> cursor.execute(sql)\n> res = cursor.fetchall()\n> for i in res:\n> \tprint i\n> cursor.close()\n> conn.commit()\n> \n> strangely if I remove the last 2 lines (cursor.close() and \n> conn.commit()) I don't get the errors.\n> \n> Also I don't notice that I don't have this problem with psql command \n> line either. Is this the Python API causing this?\n> \n> Thanks for any help\n> \n> Mike\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 14 Dec 2002 17:58:30 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] PerformPortalClose warning in 7.3" }, { "msg_contents": "Michael Engelhart wrote in gmane.comp.db.postgresql.general:\n> Hi -\n> I've been running PostgreSQL 7.3 on Mac OS X 10.2 since it was released \n> and it's been running fine. I'm using pyPgSQL 2.3 for client side \n> programming which also was working great until tonight. Now whenever \n> I do any query of any type, I get warnings like this:\n> \n> WARNING: PerformPortalClose: portal \"pgsql_00179f10\" not found\n> \n> It \"appears\" that everything is still working the way it was but it's a \n> bit discomforting to have these show up on my screen while running my \n> applications.\n> \n> Anyone that can explain this?\n> \n> Here's a tiny bit of Python sample code that I used to make sure it \n> wasn't my other code causing the problems\n> \n> from pyPgSQL import PgSQL\n> \n> dbname = \"template1\"\n> conn = PgSQL.connect(database=dbname)\n> cursor = conn.cursor()\n> sql = \"SELECT now()\";\n> cursor.execute(sql)\n> res = cursor.fetchall()\n> for i in res:\n> \tprint i\n> cursor.close()\n> conn.commit()\n\nActually, pyPgSQL is using PostgreSQL portals behind your back. This\nis a feature!\n\nTo show this, we use the undocumented, but very handy toggleShowQuery\nflag. The effect is that we can see what SQL pyPgSQL sends to the\nbackend using libpq (the lines staring with QUERY: below):\n\n#v+\ngerhard@gargamel:~$ python\nPython 2.2.2 (#1, Nov 30 2002, 23:19:58) \n[GCC 2.95.4 20020320 [FreeBSD]] on freebsd4\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from pyPgSQL import PgSQL\n>>> con = PgSQL.connect()\n>>> con.conn.toggleShowQuery\n'On'\n>>> cursor = con.cursor()\nQUERY: BEGIN WORK\n>>> cursor.execute(\"select * from test\")\nQUERY: DECLARE \"PgSQL_0811F1EC\" CURSOR FOR select * from test\nQUERY: FETCH 1 FROM \"PgSQL_0811F1EC\"\nQUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 23\nQUERY: SELECT typname, -1 , typelem FROM pg_type WHERE oid = 1043\n>>> result = cursor.fetchmany(5)\nQUERY: FETCH 4 FROM \"PgSQL_0811F1EC\"\n>>> result\n[[None, 'A'], [None, 'B'], [None, 'C'], [None, 'F'], [None, 'F']]\n>>> con.commit()\nQUERY: CLOSE PgSQL_0811F1EC\nQUERY: COMMIT WORK\n>>> \n#v-\n\nThis gives me a warning like this:\n\n#v+\nWARNING: PerformPortalClose: portal \"pgsql_0811f1ec\" not found\n#v-\n\nAs far as I can see, the SQL pyPgSQL emits is perfectly ok. But I'd be\nglad to hear a clarification.\n\n> strangely if I remove the last 2 lines (cursor.close() and \n> conn.commit()) I don't get the errors.\n> \n> Also I don't notice that I don't have this problem with psql command \n> line either. Is this the Python API causing this?\n\nIf you use the same SQL statements using portals in psql, you get the\nsame warning (obviously). 
I just tried.\n\nGerhard (pyPgSQL developer)\n-- \nFavourite database: http://www.postgresql.org/\nFavourite programming language: http://www.python.org/\nCombine the two: http://pypgsql.sf.net/\nEmbedded database for Python: http://pysqlite.sf.net/\n\n\n", "msg_date": "Mon, 23 Dec 2002 01:18:46 +0000 (UTC)", "msg_from": "Gerhard Haering <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PerformPortalClose warning in 7.3" }, { "msg_contents": "Gerhard Haering <[email protected]> writes:\n> To show this, we use the undocumented, but very handy toggleShowQuery\n> flag. The effect is that we can see what SQL pyPgSQL sends to the\n> backend using libpq (the lines staring with QUERY: below):\n\n\n> QUERY: DECLARE \"PgSQL_0811F1EC\" CURSOR FOR select * from test\n> ...\n> QUERY: CLOSE PgSQL_0811F1EC\n\nThis looks like a pyPgSQL bug to me. If it's going to use a mixed-case\nname for the cursor then it must either always double-quote the name or\nnever do so. Failing to double-quote in the CLOSE command is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 22 Dec 2002 21:36:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PerformPortalClose warning in 7.3 " } ]
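The case-folding behaviour behind this warning is easy to reproduce by hand
in psql. The cursor name below is invented; on 7.3 the mismatched CLOSE only
draws the WARNING quoted in this thread, while later releases raise an error
instead (which would also abort the open transaction):

   BEGIN;
   DECLARE "PgSQL_Demo" CURSOR FOR SELECT now();
   FETCH 1 FROM "PgSQL_Demo";
   CLOSE PgSQL_Demo;       -- unquoted: folded to pgsql_demo, so no such portal exists
   CLOSE "PgSQL_Demo";     -- quoted: matches the name used in DECLARE
   COMMIT;

pyPgSQL quotes the generated name in DECLARE and FETCH but not in CLOSE,
which is the same mismatch; quoting (or consistently lower-casing) the
generated cursor name in every statement avoids the warning.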
[ { "msg_contents": "Folks:\n \nI had a new question from a client: is it possible to \"cap\" CPU usage\n for PostgreSQL running on Linux? They don't care if the procedure\n degrades Postgres performance, but they can't afford to have Postgres\n take up more than 30% of processor for more than 400 milliseconds\n(they\n are running some real-time operations).\n \nI can't imagine that postmaster could do this, but I thought it there\n might be some kind of Linux Kernel CPU quota option I haven't heard\nof.\n �Can anybody point me in the right direction?\n \n-Josh Berkus\n", "msg_date": "Wed, 11 Dec 2002 10:12:21 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Capping CPU usage?" }, { "msg_contents": "On Wed, 2002-12-11 at 13:12, Josh Berkus wrote:\n> Folks:\n> \n> I had a new question from a client: is it possible to \"cap\" CPU usage\n> for PostgreSQL running on Linux? They don't care if the procedure\n> degrades Postgres performance, but they can't afford to have Postgres\n> take up more than 30% of processor for more than 400 milliseconds\n> (they\n> are running some real-time operations).\n> \n> I can't imagine that postmaster could do this, but I thought it there\n> might be some kind of Linux Kernel CPU quota option I haven't heard\n> of.\n> Can anybody point me in the right direction?\n\nDon't know about Linux, but BSD cannot do that. CPU limits are hard --\nonce you hit it it'll dump the process.\n\nAnyway, would it be sufficient to simply reduce the priority of the\nprocess?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "11 Dec 2002 13:24:47 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage?" }, { "msg_contents": "\nOn Wednesday, December 11, 2002, at 12:12 PM, Josh Berkus wrote:\n> I can't imagine that postmaster could do this, but I thought it there\n> might be some kind of Linux Kernel CPU quota option I haven't heard\n> of.\n>  Can anybody point me in the right direction?\n>\n\nYou can always use nice(1) to lower it's priority. This would allow \nother processes to get the CPU more often, effectively limiting it in \nthe face of more demanding processes.\n\nulimit has a CPU time option, but it's probably not what you want. I \ndon't believe there is a kernel option for such a thing. I don't \nrecall seeing this type of accounting anywhere, but there are likely \nsome patches.\n\nCory 'G' Watson\n\n", "msg_date": "Wed, 11 Dec 2002 12:27:45 -0600", "msg_from": "Cory 'G' Watson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage?" }, { "msg_contents": " \n> > I can't imagine that postmaster could do this, but I thought it there\n> > might be some kind of Linux Kernel CPU quota option I haven't heard\n> > of.\n> > Can anybody point me in the right direction?\n\n\nI was reading an interview last night (found from /.) on the O(1) scheduler. One thing that was mentioned was batch tasks which get only cpu that's not being used for other things, in blocks of 3 seconds. It has some harder enforcement of nice levels (i.e. batch @ 10 can completely prevent a batch @ 15 from running untill it completes, but is completely interruptable by ordinary processes). Since all the parameters are tweakable, many while running, this may be a place to look. 
\n\neric\n\n\n\n", "msg_date": "Wed, 11 Dec 2002 10:44:39 -0800", "msg_from": "eric soroos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage?" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n>> I had a new question from a client: is it possible to \"cap\" CPU usage\n>> for PostgreSQL running on Linux?\n\n> Anyway, would it be sufficient to simply reduce the priority of the\n> process?\n\nIf the issue is to prevent Postgres *as a whole* from hogging CPU usage,\nI would think that nice-ing the postmaster at launch would work\nbeautifully. Requests like \"I want Postgres to use no more than 30%\nof CPU\" make no sense to me: if the CPU is otherwise idle, why should\nyou insist on reserving 70% of it for the idle loop?\n\nBut what we commonly see is \"I want to cap the resource usage of this\nparticular query\", and that is a whole lot harder. You cannot win by\nnice-ing one single backend, because of priority-inversion concerns.\n(The queries you would like to be high-priority might be blocked waiting\nfor locks held by low-priority backends.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Dec 2002 00:52:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage? " }, { "msg_contents": "Tom,\n\n> If the issue is to prevent Postgres *as a whole* from hogging CPU\n> usage,\n> I would think that nice-ing the postmaster at launch would work\n> beautifully. Requests like \"I want Postgres to use no more than 30%\n> of CPU\" make no sense to me: if the CPU is otherwise idle, why should\n> you insist on reserving 70% of it for the idle loop?\n\n<grin> That's what I asked the person who asked me. Apparently, they\nwant to do real-time operations without forking out for a real-time OS.\n My response was \"you can nice the postmaster, and simplify your\nqueries, but that's about it\".\n\nThank you, everybody, for confirming this.\n\n-Josh Berkus\n", "msg_date": "Thu, 12 Dec 2002 09:43:25 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Capping CPU usage? " }, { "msg_contents": "On Thu, 2002-12-12 at 11:43, Josh Berkus wrote:\n> Tom,\n> \n> > If the issue is to prevent Postgres *as a whole* from hogging CPU\n> > usage,\n> > I would think that nice-ing the postmaster at launch would work\n> > beautifully. Requests like \"I want Postgres to use no more than 30%\n> > of CPU\" make no sense to me: if the CPU is otherwise idle, why should\n> > you insist on reserving 70% of it for the idle loop?\n> \n> <grin> That's what I asked the person who asked me. Apparently, they\n> want to do real-time operations without forking out for a real-time OS.\n> My response was \"you can nice the postmaster, and simplify your\n> queries, but that's about it\".\n\nMaybe, even with processes niced down low, the current Linux scheduler\n\"drop down\" a currently schueduled/executing process. \n\nMaybe that would change with the low-latency patches, or with the\nO(1) scheduler in kernel 2.6.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "12 Dec 2002 11:57:57 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage?" } ]
[ { "msg_contents": "Folks,\n\nI am hoping to start a thread where users post their experiences with\nvarious RAID and SCSI controllers running Postgres. When completed,\nI'll post it somewhere on Techdocs with a big disclaimer. I'll start it\noff:\n\nMYLEX AcceleRAID 170: Not supported under Linux 2.4 kernels.\n Performance under RAID 5 with 3 Maxtor UW SCSI disks good on read\noperations (slightly better than a single SCSI disk) but on large write\noperations poor, similar to low-end IDE disks in having disk-acccess\nbottlenecks. Suspected in our installation of locking up on very large\nsimultaneous read/write operations, such as data tranformations on\ntables over 1 million records. (cause of lockup not firmly determined\nyet). (Josh Berkus 11/2002)\n\n-Josh Berkus\n", "msg_date": "Wed, 11 Dec 2002 13:49:20 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Good/Bad RAID and SCSI controllers?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> I am hoping to start a thread where users post their experiences with\n> various RAID and SCSI controllers running Postgres. When completed,\n> I'll post it somewhere on Techdocs with a big disclaimer. I'll start it\n> off:\n\nSounds like a really good idea. There's already the beginnings of a page on Techdocs for this too. ;-)\n\nHere's two thoughts that might be helpful, although they're not RAID.\n\nAdvansys UW SCSI controller: Brain damaged. Won't let standard Seagate Cheetah 10k RPM drives operating at all without \nhaving SCSI Disconnection turned off, and speed is forced to a maximum throughput of 6MB/s. 100% not recommended.\n\nAdaptec 29160 Ultra160 controller, BIOS version 3.10.0: Seems nice. Everything works well, most stuff is automatically \nconfigured, supported by just about everything. Haven't done throughput benchmarks though.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> MYLEX AcceleRAID 170: Not supported under Linux 2.4 kernels.\n> Performance under RAID 5 with 3 Maxtor UW SCSI disks good on read\n> operations (slightly better than a single SCSI disk) but on large write\n> operations poor, similar to low-end IDE disks in having disk-acccess\n> bottlenecks. Suspected in our installation of locking up on very large\n> simultaneous read/write operations, such as data tranformations on\n> tables over 1 million records. (cause of lockup not firmly determined\n> yet). (Josh Berkus 11/2002)\n> \n> -Josh Berkus\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Thu, 12 Dec 2002 09:06:00 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good/Bad RAID and SCSI controllers?" }, { "msg_contents": "Justin,\n\n> Sounds like a really good idea. There's already the beginnings of a\n> page on Techdocs for this too. ;-)\n\nWhere? I don't see it.\n\n-Josh\n\n", "msg_date": "Wed, 11 Dec 2002 14:26:29 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Good/Bad RAID and SCSI controllers?" }, { "msg_contents": "On Thu, 12 Dec 2002, Justin Clift wrote:\n\n> Josh Berkus wrote:\n> > Folks,\n> > \n> > I am hoping to start a thread where users post their experiences with\n> > various RAID and SCSI controllers running Postgres. When completed,\n> > I'll post it somewhere on Techdocs with a big disclaimer. 
I'll start it\n> > off:\n> \n> Sounds like a really good idea. There's already the beginnings of a page on Techdocs for this too. ;-)\n> \n> Here's two thoughts that might be helpful, although they're not RAID.\n> \n> Advansys UW SCSI controller: Brain damaged. Won't let standard Seagate Cheetah 10k RPM drives operating at all without \n> having SCSI Disconnection turned off, and speed is forced to a maximum throughput of 6MB/s. 100% not recommended.\n> \n> Adaptec 29160 Ultra160 controller, BIOS version 3.10.0: Seems nice. Everything works well, most stuff is automatically \n> configured, supported by just about everything. Haven't done throughput benchmarks though.\n\nI'll throw a vote in behind the SymBIOS / LSI logic cards. They are quite \nstable and reliable, and generally faster than most other cards. I've got \nan UW symbios card at home I'll have to truck into work to play with so I \ncan compare it to my Adaptecs here.\n\nI picked it up on Ebay (the symbios card) for $30, and it had a network \ninterface on it too, but the guy didn't know what kind it was. Turned out \nto be gig ethernet interface with the yellowfin chipset. not a bad deal, \nwhen you think about it. poor thing gets to run my scanner, a tape drive, \nand an old Plextor 12 Plex CDROM drive. I'd like to hook up something \nwith the gigabit nic someday while it's still considered somewhat fast. \n:-)\n\nFor insight into the SCSI cards that Linux supports and what the \nmaintainers think, I highly recommend a tour of the driver source code \nfiles. It's amazing how often the words \"brain damaged\" and \"piece of \ncrap\" show up there.\n\n", "msg_date": "Wed, 11 Dec 2002 15:34:27 -0700 (MST)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good/Bad RAID and SCSI controllers?" }, { "msg_contents": "Josh Berkus wrote:\n> Justin,\n> \n> \n>>Sounds like a really good idea. There's already the beginnings of a\n>>page on Techdocs for this too. ;-)\n> \n> \n> Where? I don't see it.\n\nWas thinking about this:\n\nhttp://techdocs.postgresql.org/guides/DiskTuningGuide\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n\n> -Josh\n> \n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Thu, 12 Dec 2002 09:50:39 +1100", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good/Bad RAID and SCSI controllers?" }, { "msg_contents": "On 11 Dec 2002 at 15:34, scott.marlowe wrote:\n\n> On Thu, 12 Dec 2002, Justin Clift wrote:\n> > Advansys UW SCSI controller: Brain damaged. Won't let standard Seagate Cheetah 10k RPM drives operating at all without \n> > having SCSI Disconnection turned off, and speed is forced to a maximum throughput of 6MB/s. 100% not recommended.\n> > \n> > Adaptec 29160 Ultra160 controller, BIOS version 3.10.0: Seems nice. Everything works well, most stuff is automatically \n> > configured, supported by just about everything. Haven't done throughput benchmarks though.\n> \n> I'll throw a vote in behind the SymBIOS / LSI logic cards. They are quite \n> stable and reliable, and generally faster than most other cards. 
I've got \n> an UW symbios card at home I'll have to truck into work to play with so I \n> can compare it to my Adaptecs here.\n> \n> I picked it up on Ebay (the symbios card) for $30, and it had a network \n> interface on it too, but the guy didn't know what kind it was. Turned out \n> to be gig ethernet interface with the yellowfin chipset. not a bad deal, \n> when you think about it. poor thing gets to run my scanner, a tape drive, \n> and an old Plextor 12 Plex CDROM drive. I'd like to hook up something \n> with the gigabit nic someday while it's still considered somewhat fast. \n> :-)\n\nRight now page on techdocs is pretty thin on such details. I suggest these \nauthors to put this information(barring humour etc. Just experiences) on that \ndocument.\n\nSecondly I see my name there as contributor but I do not recall any \ncontribution. Anyway since I would like to have my name there, I will put some \ninfo there as well.\n\nBye\n Shridhar\n\n--\nRules for driving in New York:\t(1) Anything done while honking your horn is \nlegal.\t(2) You may park anywhere if you turn your four-way flashers on.\t(3) A \nred light means the next six cars may go through the\t intersection.\n\n", "msg_date": "Thu, 12 Dec 2002 12:43:02 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Good/Bad RAID and SCSI controllers?" } ]
[ { "msg_contents": "Wei,\n\n> I have two tables A and B where A is a huge table with thousands of\n> rows, B is a small table with only a couple of entries.\n>\n> I want to do something like\n>\n> SELECT\n> A.ID\n> A.Name\n> FROM\n> A JOIN B ON (A.ID = B.ID)\n\nYou might consider:\nSELECT A.ID\n A.Name\nFROM A\nWHERE EXISTS (SELECT ID FROM B WHERE B.ID = A.ID)\n\nThis lets the parser know that you are not interested in retrieving\nentire records from B, just those rows from A which correspond to B.\n Run that, and compare the EXPLAIN ANALYZE plan against one which lets\nthe that parser have free reign:\n\nSELECT A.ID, A.Name\nFROM A, B\nWHERE A.ID = B.ID\n\nChances are, the parser will do a better job on the query than you can\ndo by making stuff explicit.\n\nGive it a try.\n\n-Josh Berkus\n", "msg_date": "Wed, 11 Dec 2002 13:53:38 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which of the solution is better?" } ]
[ { "msg_contents": "\nHi:\n\n How long does it take to commit a change to change\nto the database?\n\n I'm currently developing a application where\nresponse time should be fast. Today I notice the\nfollowing in my application log:\n\n [12/10/2002 16:49:52] SQL statement created\n [12/10/2002 16:49:58] Updating OK.\n\nThe SQL statement is a just a stored procedure that\ninsert a single row to a table. 6 seconds is quite a\nlong time to execute an insert statement even if the\ntable has referential integrity constrants and some\ntriggers (the database is small, no tables having more\nthan 100 rows). I tried to recreate the scenario by\ndoing the following at a psql prompt:\n\nbegin;\n\nexplain analyze\nselect\nf_credit_insert('0810030358689',3,121002,402,1096,1654,62550/100\n,'ADXLXDDN',0); -- call the stored procedure\n\nrollback;\n\nThe following is the result of the explain analyze:\npilot=# explain analyze\npilot-# select\nf_credit_insert('0810030358689',3,121002,402,1096,1654,62550/\n,'ADXLXDDN',0);\nNOTICE: QUERY PLAN:\n\nResult (cost=0.00..0.01 rows=1 width=0) (actual\ntime=195.95..195.95 rows=1\ns=1)\nTotal runtime: 195.97 msec\n\nNOTICE: UPDATING fsphdr from f_ti_fspdetl\nNOTICE: Current points = 625\nNOTICE: INSERTING into sc_add_points from\nf_ti_fspdetl\nNOTICE: date = 20021210 at f_ti_sc_add_points\nNOTICE: time = 1654 at f_ti_sc_add_points\nNOTICE: transtime = 1654 at f_auto_redeem\nNOTICE: transdate = 20021210 at f_auto_redeem\nNOTICE: balance = 1250\nNOTICE: points needed to redeem = 5000\nNOTICE: Lack the points to merit an auto-redemption\nin f_auto_redeem\n\n\n Since the database is not yet in \"full production\"\nmode. I put NOTOICEs to help me debug.\n\n I can only think of the following reasons why it\ntook 5 seconds to execute the sql statements in a C++\napplication using libpq while it took 195.67 ms. :\n a) NOTICEs are also written to /var/log/messages so\nit can take some time. Does size of the\n/var/log/messages affect the time to execute stored\nprocedures having NOTICE statements?\n b) Connection time overhead.\n c) RAID 5.\n\n There not much concurrent connection at that time (5\nusers at most concurrently connected during that time)\n\n One of the factor that I can't tell is the time it\ntakes to commit that particular transaction. Are there\nways to approximate the time to commit the changes\ngiven the time it take execute that particular sql\nstatement (I'm assuming that there is only 1 SQL\nstatement in that particular transaction).\n\n Anybody has a idea why it took that long to commit?\nMy setup is a Pentium 4 with RAID 5. My version of\npostgresql is 7.2.2\n\n\nThank you very much,\n\nludwig.\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Wed, 11 Dec 2002 23:35:29 -0800 (PST)", "msg_from": "Ludwig Lim <[email protected]>", "msg_from_op": true, "msg_subject": "Time to commit a change" }, { "msg_contents": "On 11 Dec 2002 at 23:35, Ludwig Lim wrote:\n> How long does it take to commit a change to change\n> to the database?\n\nShoudln't be long actually..\n\n> [12/10/2002 16:49:52] SQL statement created\n> [12/10/2002 16:49:58] Updating OK.\n> \n> The SQL statement is a just a stored procedure that\n> insert a single row to a table. 6 seconds is quite a\n> long time to execute an insert statement even if the\n> table has referential integrity constrants and some\n> triggers (the database is small, no tables having more\n> than 100 rows). 
I tried to recreate the scenario by\n> doing the following at a psql prompt:\n\nI don't believe it would take so long. Last time I benchmarked postgresql on \nmandrake 8.2, I was able to insert/update/delete in 210-240ms on average. I was \nbenhmarking a server application on a lowly P-III-450 with 256MB RAM and IDE \ndisk.\n\nI put 30 clients on that and still excecution time was 240ms. But since there \nwere 20 clients I was getting 240/30=8ms on an average thorughput.\n\nAll the inserts/updates/deletes were in single transaction as well and tables \nwere small 100-1000 rows. \n\n> a) NOTICEs are also written to /var/log/messages so\n> it can take some time. Does size of the\n> /var/log/messages affect the time to execute stored\n> procedures having NOTICE statements?\n> b) Connection time overhead.\n> c) RAID 5.\n\nI don't think any of these matters. What explain throws out is an estimate and \nit might be wrong as well.\n\n> One of the factor that I can't tell is the time it\n> takes to commit that particular transaction. Are there\n> ways to approximate the time to commit the changes\n> given the time it take execute that particular sql\n> statement (I'm assuming that there is only 1 SQL\n> statement in that particular transaction).\n\nYes. Try something like this in C/C++\n\ngettimeofday\nbegin\ntransact\ngettimeofday\ncommit\ngettimeofday.\n\nI am certain it will be in range of 200-250ms. Couldn't get it below that on a \nnetwork despite of pooled connections..\n\nI am not sure second gettimeofday will be of any help but first and third will \ndefinitely give you an idea.\n\n \n> Anybody has a idea why it took that long to commit?\n> My setup is a Pentium 4 with RAID 5. My version of\n> postgresql is 7.2.2\n\nI would put that to 200ms if client and server on same machine. Let us know \nwhat it turns out..\n\nHTH\n\nBye\n Shridhar\n\n--\nJim Nasium's Law:\tIn a large locker room with hundreds of lockers, the few \npeople\tusing the facility at any one time will all have lockers next to\teach \nother so that everybody is cramped.\n\n", "msg_date": "Thu, 12 Dec 2002 13:37:34 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to commit a change" }, { "msg_contents": "On Wed, 11 Dec 2002, Ludwig Lim wrote:\n\n>\n> Hi:\n>\n> How long does it take to commit a change to change\n> to the database?\n>\n> I'm currently developing a application where\n> response time should be fast. Today I notice the\n> following in my application log:\n>\n> [12/10/2002 16:49:52] SQL statement created\n> [12/10/2002 16:49:58] Updating OK.\n>\n> The SQL statement is a just a stored procedure that\n> insert a single row to a table. 6 seconds is quite a\n> long time to execute an insert statement even if the\n> table has referential integrity constrants and some\n> triggers (the database is small, no tables having more\n> than 100 rows). I tried to recreate the scenario by\n> doing the following at a psql prompt:\n\nWas this run while anything else was hitting the database\nor just by itself? I'd wonder if there were any lock\ncontentions (for example on foreign keys) or anything\nlike that which might have had some effect.\n\n", "msg_date": "Thu, 12 Dec 2002 09:15:43 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to commit a change" }, { "msg_contents": "Ludwig,\n\n> Anybody has a idea why it took that long to commit?\n> My setup is a Pentium 4 with RAID 5. 
My version of\n> postgresql is 7.2.2\n\nDisk contention is also a very possible issue. I'd suggest trying the\nsame test when you are certain that no other disk activity is\nhappening. I've seen appalling wait times for random writes on some\nRAID5 controllers.\n\nAlso, how about publishing the text of the function?\n\nWhat controller are you using? How many dirves, of what type?\n\n-Josh Berkus\n", "msg_date": "Thu, 12 Dec 2002 09:49:08 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Time to commit a change" } ]
[ { "msg_contents": "The manual is pretty sparse on advice regarding indices. Plenty of\ngood feature documentation, but not much about when and where an index\nis appropriate (except a suggestion that multi-column indices should \nbe avoided).\n\nOf course, the ultimate arbiter of which indices are used is the\nplanner/optimizer. If i could somehow convince the optimizer to\nconsider indices that don't yet exist, it could tell me which would\ngive the greatest benefit should i add them.\n\nSo, i'm writing for two reasons. First, i want to gauge interest in\nthis tool. Is this something that people would find useful?\n\nSecond, i am looking to solicit some advice. Is this project even\nfeasible? If so, where would be the best place to start? My assumption\nhas been that i would need to hack into the current code for\ndetermining index paths, and spoof it somehow, but is that possible\nwithout actually creating the indices?\n\nAny and all feedback welcome.\n\n-johnnnnnnnnnn\n", "msg_date": "Thu, 12 Dec 2002 21:22:38 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": true, "msg_subject": "automated index suggestor -- request for comment" }, { "msg_contents": "On Fri, 2002-12-13 at 03:22, johnnnnnn wrote:\n> The manual is pretty sparse on advice regarding indices. Plenty of\n> good feature documentation, but not much about when and where an index\n> is appropriate (except a suggestion that multi-column indices should \n> be avoided).\n> \n> Of course, the ultimate arbiter of which indices are used is the\n> planner/optimizer. If i could somehow convince the optimizer to\n> consider indices that don't yet exist, it could tell me which would\n> give the greatest benefit should i add them.\n\nthe generated index names should be self-explaining or else we would\nhave to change EXPLAIN output code as well, just to tell what the actual\nindex definition was.\n\nThat could become the EXPLAIN SPECULATE command ?\n\n> So, i'm writing for two reasons. First, i want to gauge interest in\n> this tool. Is this something that people would find useful?\n\nSure it would be helpful.\n\n> Second, i am looking to solicit some advice. Is this project even\n> feasible?\n\nAs tom recently wrote on this list, no statistics is _gathered_ base on\nexistence of indexes, so pretending that they are there should be\nlimited just to planner changes plus a way to tell the planner to do it.\n\n> If so, where would be the best place to start? My assumption\n> has been that i would need to hack into the current code for\n> determining index paths, and spoof it somehow, but is that possible\n> without actually creating the indices?\n\nEither with or without real indexes, it's all just code ;)\n\nIn worst case you could generate the entries in pg_class table without\nbuilding the actual index and then drop or rollback when the explain is\nready.\n\nOf course you could just determine all possibly useful indexes and\ngenerate then anyhow an then drop them if they were not used ;)\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "13 Dec 2002 12:05:36 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automated index suggestor -- request for comment" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> That could become the EXPLAIN SPECULATE command ?\n\n[ snicker... ] Seriously, it wouldn't be hard to inject a slew of phony\nindex definitions into the planner to see what it comes up with. 
You\njust have to cons up an IndexOptInfo record, the planner will be none\nthe wiser. The tricky part is deciding which indexes are even worth\nexpending planner cycles on. (\"Make 'em all\" doesn't seem very\npractical when you consider multi-column or functional indexes.)\n\nAlso, I don't see any reasonable way to automatically suggest partial\nindexes; certainly not on the basis of individual queries.\n\nThe big boys approach this sort of problem with \"workload analysis\"\ntools, which start from a whole collection of sample queries not just\none. I don't think EXPLAIN applied to individual queries can hope to\nproduce similarly useful results.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Dec 2002 09:49:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automated index suggestor -- request for comment " }, { "msg_contents": "On Fri, Dec 13, 2002 at 09:49:53AM -0500, Tom Lane wrote:\n> Hannu Krosing <[email protected]> writes:\n> > That could become the EXPLAIN SPECULATE command ?\n> \n> [ snicker... ] Seriously, it wouldn't be hard to inject a slew of\n> phony index definitions into the planner to see what it comes up\n> with. You just have to cons up an IndexOptInfo record, the planner\n> will be none the wiser. \n\nThat's good news. The easier it is, the more likely i am to actually\nget it working and available to people.\n\n> The tricky part is deciding which indexes are even worth expending\n> planner cycles on. (\"Make 'em all\" doesn't seem very practical when\n> you consider multi-column or functional indexes.)\n\nAgreed. But for a first development iteration, \"Make 'em all\" could\ncertainly include the combinatorial explosion of all single- and\nmulti-column indices. It might be slow as a dog, but it would exist.\n\n> The big boys approach this sort of problem with \"workload analysis\"\n> tools, which start from a whole collection of sample queries not\n> just one. I don't think EXPLAIN applied to individual queries can\n> hope to produce similarly useful results.\n\nAgain, agreed. My intent was to start with something simple which\ncould only deal with one query at a time, and then build a more robust\ntool from that point.\n\nThat said, i wasn't planning on grafting onto the EXPLAIN syntax, but\nrather creating a new SUGGEST command, which could take a query or\neventually a workload file. The other option was to decouple it from\npg proper and have an independent application to live in contrib/ or\ngborg.\n\n-johnnnnnnnnnnn\n", "msg_date": "Fri, 13 Dec 2002 09:20:54 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automated index suggestor -- request for comment" }, { "msg_contents": "On Fri, Dec 13, 2002 at 05:00:32PM +0000, Hannu Krosing wrote:\n> On Fri, 2002-12-13 at 14:56, george young wrote:\n> > > Of course you could just determine all possibly useful indexes\n> > > and generate then anyhow an then drop them if they were not used\n> > > ;)\n> > \n> > Why not! At least for selects, this seems like the ideal. For\n> > insert and update, you have to deal with updating the superfluous\n> > indexes -- does the planner include index updating in its work\n> > estimates?\n\nWell, i had a few reasons i didn't want to *actually* create the\nindices:\n\n1- Disk space. If it's evaluating all indices, including multi-column\nindices, that ends up being a significant space drain.\n\n2- Time. 
Creating indices can take a while for big tables (again,\nmoreso for multi-column indices).\n\n3- Usability on running systems. If i can eliminate actual index\ncreation, it won't tie up disk access on systems that are already\ndealing with high load.\n\n> At least I think we don't optimize the plan for different index\n> access patterns for updating indexes.\n\nI don't think that's the case either, which makes it more difficult to\nestimate negative cost of index creation. Not sure how i'll deal with\nthat except by (for now) ignoring it.\n\n-johnnnnnnnnnnn\n", "msg_date": "Fri, 13 Dec 2002 09:37:19 -0600", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: automated index suggestor -- request for comment" }, { "msg_contents": "I cc'b back to list, hope this is ok?\n\nOn Fri, 2002-12-13 at 14:56, george young wrote:\n> On 13 Dec 2002 12:05:36 +0000\n> Hannu Krosing <[email protected]> wrote:\n> \n> > On Fri, 2002-12-13 at 03:22, johnnnnnn wrote:\n> > \n> > In worst case you could generate the entries in pg_class table without\n> > building the actual index and then drop or rollback when the explain is\n> > ready.\n> > \n> --> Of course you could just determine all possibly useful indexes and <--\n> --> generate then anyhow an then drop them if they were not used ;) <--\n> \n> Why not! At least for selects, this seems like the ideal. For insert\n> and update, you have to deal with updating the superfluous indexes --\n> does the planner include index updating in its work estimates? \n\nProbably not - the work should be almost the same (modulo cached status\nof index pages) for any plan. \n\nAt least I think we don't optimize the plan for different index access\npatterns for updating indexes.\n\n> > For queries\n> that use functions in the where clause, you'd have to parse enough to know\n> to include indexes on the functions (I know-- the last time I said \"all I\n> have to do is parse ...\" I was really sorry later...).\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "13 Dec 2002 17:00:32 +0000", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automated index suggestor -- request for comment" }, { "msg_contents": "On Thu, 2002-12-12 at 21:22, johnnnnnn wrote:\n> The manual is pretty sparse on advice regarding indices. Plenty of\n> good feature documentation, but not much about when and where an index\n> is appropriate (except a suggestion that multi-column indices should \n> be avoided).\n> \n> Of course, the ultimate arbiter of which indices are used is the\n> planner/optimizer. If i could somehow convince the optimizer to\n> consider indices that don't yet exist, it could tell me which would\n> give the greatest benefit should i add them.\n> \n> So, i'm writing for two reasons. First, i want to gauge interest in\n> this tool. Is this something that people would find useful?\n> \n> Second, i am looking to solicit some advice. Is this project even\n> feasible? If so, where would be the best place to start? 
My assumption\n> has been that i would need to hack into the current code for\n> determining index paths, and spoof it somehow, but is that possible\n> without actually creating the indices?\n\nIsn't this what a DBA (or, heck, even a modestly bright developer)\ndoes during transactional analysis?\n\nYou know what the INSERTs and statements-that-have-WHERE-clauses\nare, and, hopefully, approximately how often per day (or week)\neach should execute.\n\nThen, *you* make the decision about which single-key and multi-key\nindexes should be created, based upon\na) the cardinality of each table\nb) the frequency each query (includes UPDATE & DELETE) is run\nc) how often INSERT statements occur\n\nThus, for example, an OLTP database will have a significantly\ndifferent mix of indexes than, say, a \"reporting\" database...\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "14 Dec 2002 13:46:16 -0600", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: automated index suggestor -- request for comment" } ]
[ { "msg_contents": "\nJosh.....\n\n> I had a new question from a client: is it possible to \"cap\" CPU usage\n> for PostgreSQL running on Linux?\n\nI remember reading a few months ago about a virtual freebsd OS that\ndivides the memory and cpu up between different users. Although this is\nnot Linux proper (or improper) it is one way of doing it. I searched for\na few minutes and was unable to find the url, it was something like\nvirtualFreeBSD.org.\n\nI've been running on a virtual FreeBSD server for years from iserver, now\nverio. Each user has their own apache conf, sendmail, etc. and they claim\nto divide up memory and cpu usage. I am not sure if virtualFreeBSD is the\nsame or different product and whether it would be useful for you, but it's\nsomething to consider.\n\nbrew\n\n\n", "msg_date": "Fri, 13 Dec 2002 08:27:19 -0500 (EST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Capping CPU usage?" }, { "msg_contents": "Brew, \n\n> I've been running on a virtual FreeBSD server for years from iserver,\n> now\n> verio. Each user has their own apache conf, sendmail, etc. and they\n> claim\n> to divide up memory and cpu usage. I am not sure if virtualFreeBSD\n> is the\n> same or different product and whether it would be useful for you, but\n> it's\n> something to consider.\n\nInteresting idea. Sadly for this client, they are trying to cap CPU\nusage because they are short on system resources, so virtualization is\nnot an option.\n\nHowever, it would be a possiblilty to keep in mind for other projects.\n\n-Josh\n\n\n", "msg_date": "Fri, 13 Dec 2002 10:18:40 -0800", "msg_from": "\"Josh Berkus\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capping CPU usage?" } ]