Discussion:
[GENERAL] Uber migrated from Postgres to MySQL
Guyren Howe
2016-07-26 17:39:27 UTC
Permalink
Honestly, I've never heard of anyone doing that. But it sounds like they had good reasons.

https://eng.uber.com/mysql-migration/

Thoughts?
--
Sent via pgsql-general mailing list (pgsql-***@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Joshua D. Drake
2016-07-26 18:05:12 UTC
Permalink
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
Hello,

I started a thread on -hackers about this.

Sincerely,

JD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.
Dorian Hoxha
2016-07-26 18:04:58 UTC
Permalink
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like they
had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
Rakesh Kumar
2016-07-26 21:49:37 UTC
Permalink
This is old news. They are using MySQL as a NoSQL store for schemaless data: basically one giant blob column. And that's where the role of MySQL ends; the bulk of the processing will be in NoSQL.
Joshua D. Drake
2016-07-26 21:56:42 UTC
Permalink
Post by Rakesh Kumar
This is an old news. They are using mysql as a nosql to store schemaless. Basically one giant blob col. And thats where the role of mysql ends. The bulk of the processing will be in nosql.
That doesn't mean they didn't bring up some very good points.

JD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.
Rakesh Kumar
2016-07-26 22:10:40 UTC
Permalink
--
Sent from mobile.
Post by Rakesh Kumar
This is an old news. They are using mysql as a nosql to store schemaless. Basically one giant blob col. And thats where the role of mysql ends. The bulk of the processing will be in nosql.
That doesn't mean they didn't bring up some very good points.

JD

Yes, of course. But this is not one of those "product A sucked, we moved to product B, and the same workload now runs ten times better" stories.
Condor
2016-07-27 07:15:07 UTC
Permalink
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right about upgrades.
It's hard to shut down a 1 TB database and then wait a couple of days for
pg_upgrade to finish while the database is offline.
In some distros, after upgrading the PG version you no longer have the old
binaries and libraries, so you need to do a full dump and restore, which
takes time and disk space.


Regards,
Hristo S.
Chris Travers
2016-07-27 07:17:58 UTC
Permalink
Just a few points on reading this.

First, the timeline bugs regarding replication (particularly, IIRC, in the
9.1 days) were real. I remember accidentally corrupting a (fortunately only
demonstration!) database cluster in the process of demonstrating promotion
at least once. IIRC, the last time I tried to reproduce these problems, they
had been fixed (by 9.3?).

The replication section made me wonder, though, whether they were using the
right replication solution for the job. If you don't want an on-disk copy,
don't use physical replication. That said, there is one serious issue here
worth mentioning: since autovacuum on the master has no knowledge of queries
on the slave, it is easy for longer-running queries on a slave to need rows
that autovacuum has already removed and replication then removes on the
slave. This can of course be worked around (if your query takes 30 sec to
run, open a minute-long transaction on the master every 30 sec, so that
autovacuum can never clean up rows deleted within the last 30 sec), but that
is not a very robust solution and may cause more problems than it is worth
(the real fix, if this is a problem for you, is moving to a logical
replication system). As I usually put it: streaming replication is for cases
where you need to guarantee an exact replica of everything, while logical
replication is for where you need a copy of the data for use.
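To make the conflict concrete, here is a toy model of the visibility rule behind it. This is not Postgres internals, just an illustrative sketch; all names and numbers are made up:

```python
# Toy model of the vacuum-vs-standby-query conflict described above.
# Not Postgres internals -- just the MVCC visibility rule that causes it.

class Tuple:
    def __init__(self, value, xmin, xmax=None):
        self.value = value
        self.xmin = xmin   # txid that created the tuple
        self.xmax = xmax   # txid that deleted it (None = still live)

def visible(tup, snapshot_xid):
    """A snapshot taken at `snapshot_xid` sees tuples created before it
    and not yet deleted as of that point."""
    return tup.xmin <= snapshot_xid and (tup.xmax is None or tup.xmax > snapshot_xid)

table = [Tuple("ride-1", xmin=100)]

standby_snapshot = 150          # long-running query on the replica
table[0].xmax = 200             # master deletes the row at txid 200

# Vacuum on the master only knows about *master* snapshots (oldest = 300
# here), so the dead tuple looks safely removable:
oldest_master_snapshot = 300
removable = [t for t in table if t.xmax is not None and t.xmax < oldest_master_snapshot]
table = [t for t in table if t not in removable]

# When the removal replays on the standby, the old query's snapshot still
# needed the tuple -- the standby must cancel the query (or pause replay):
conflict = all(not visible(t, standby_snapshot) for t in table) and bool(removable)
print("query-cancellation conflict:", conflict)   # True
```

In real deployments, the knobs for this trade-off are hot_standby_feedback (the standby tells the master what it still needs, bloating the master instead) and max_standby_streaming_delay (how long replay waits before canceling the query).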

Finally, if I were trying to create something like Schemaless, there is one
major limitation of PostgreSQL that is not mentioned here: TOAST overhead.
I have seen people try to do things like this, and TOAST overhead can be a
real problem in these cases. If the data for a row won't easily fit in
significantly less than a page, then every read of that data and every
write can effectively do an implicit nested loop join. And if you want to
talk about write amplification... But this is also very well hidden and not
easy to measure unless you know to look for it specifically, so it is
possible that they ran into it and didn't know it. I don't have any
knowledge of what they did or tried, so I could be totally off base here.
I will say I have seen more than one project run into this, in part because
explain analyze select * does not detoast...
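A back-of-envelope sketch of that overhead; the constants below are approximate defaults for an 8 KB page build (check your build's actual values) and compression is ignored:

```python
import math

# Rough model of the TOAST overhead described above. The constants are
# approximate defaults for an 8 KB page build, not authoritative values,
# and compression is ignored.
PAGE_SIZE = 8192
TOAST_THRESHOLD = 2000      # tuples larger than ~this get TOASTed
TOAST_CHUNK = 1996          # ~max bytes stored per toast chunk row

def toast_chunks(datum_bytes):
    """Extra rows fetched (via the toast index) to reassemble one
    out-of-line datum -- the 'implicit nested loop join' above."""
    if datum_bytes <= TOAST_THRESHOLD:
        return 0            # stored inline, no extra fetches
    return math.ceil(datum_bytes / TOAST_CHUNK)

for size in (1_000, 10_000, 100_000):
    print(f"{size:>7}-byte document -> {toast_chunks(size)} toast chunk fetches per read")
```

So a 100 KB blob column means ~50 extra tuple fetches on every read of that row, and a rewrite of all of them on every update, none of which shows up in a plain explain analyze.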

All of the above being said, there are solutions to all the major
problems, but you have to know about them, where to look, and what to do.
And at higher scale, one very important aspect is that attention to
detail starts to matter a whole lot. I agree that there are some good
points raised, but I wonder what the solutions are. There is room for some
improvement in the backend (it would really be nice to instrument and
measure toasting/detoasting overhead in explain analyze), but for a lot of
these I wonder if that is secondary. PostgreSQL is very well optimized
for a certain series of tasks, and one can build well-optimized solutions
well outside that. At a certain point (including a certain scale)
there will be no substitute for a team of people who really know the db
backend inside and out and who can design around limitations, and I think
that is true for all databases I have worked with.
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like they
had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more
Marc Fournier
2016-07-27 07:30:29 UTC
Permalink
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right for upgrades.
It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade to finish upgrade and meanwhile database is offline.
In some distros after upgrade of PG version you don't have old binary and library, need to do full dump and restore that take time and disk space.
Not having tried to use pg_upgrade on a 1TB database, this might not apply, but pg_upgrade has a --link option that should greatly minimize the time required to upgrade, since it eliminates the duplication of files ... apparently it even works on Windows:

"--link ... use hard links instead of copying files to the new cluster (use junction points on Windows)"

In fact, the --link option should also remove the requirement for 'double the disk space' to do the upgrade ...

I don't have a 1TB database to try it on, mind you, so your 'wait a couple of days' might be *with* the --link option?
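The filesystem mechanism --link relies on is easy to see in a few lines (a toy demo, not pg_upgrade itself; the file names are made up):

```python
import os
import tempfile

# The mechanism pg_upgrade --link relies on: a hard link is a second
# directory entry for the same inode, so no data is copied and no extra
# disk space is consumed.
with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "old_cluster_segment")   # stand-in for a relation file
    new = os.path.join(d, "new_cluster_segment")
    with open(old, "wb") as f:
        f.write(b"x" * 8192)

    os.link(old, new)               # what --link does instead of copying

    same_inode = os.stat(old).st_ino == os.stat(new).st_ino
    print("same inode (no copy, no extra space):", same_inode)   # True
```

The flip side, of course, is that both clusters now share the same physical files: once the new cluster starts writing through those links, the old cluster is no longer safe to start.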

--
Marc G Fournier http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services
Achilleas Mantzios
2016-07-27 07:34:10 UTC
Permalink
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right for upgrades.
It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade to finish upgrade and meanwhile database is offline.
In some distros after upgrade of PG version you don't have old binary and library, need to do full dump and restore that take time and disk space.
Our last 1TB upgrade from 9.0 -> 9.3 went like a charm, in something like seconds (with the -k option).
However, be warned that the planning and testing took one full week.
Post by Condor
Regards,
Hristo S.
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
Scott Mead
2016-07-27 14:22:27 UTC
Permalink
On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
Post by Achilleas Mantzios
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Honestly, I've never heard of anyone doing that. But it sounds like
Post by Guyren Howe
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right for upgrades.
It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade
to finish upgrade and meanwhile database is offline.
In some distros after upgrade of PG version you don't have old binary and
library, need to do full dump and restore that take time and disk space.
Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
seconds. (with the -k option)
However, be warned that the planing and testing took one full week.
That being said, it doesn't really provide a back-out plan. The beauty of
replication is that you can halt the upgrade at any point if need be and
cut your (hopefully small) losses. If you use -k, you are all in. Sure,
you could set up a new standby, stop traffic, upgrade whichever node you'd
like (using -k) and still have the other ready in the event of total
catastrophe. More often than not, I see DBAs and sysads lead the
conversation with "well, postgres can't replicate from one version to
another, so instead.... " followed by a fast-glazing of management's eyes
and a desire to buy a 'commercial database'.

All in all, Evan's blog seemed to start out decently technical, but it
quickly took a turn into half-truths, outdated information and, in some
cases, downright FUD:

"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."


ISTM that they needed a tire swing
<http://i0.wp.com/blogs.perficient.com/perficientdigital/files/2011/07/treecomicbig.jpg>
and were using a dump truck. Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.

--Scott
Post by Achilleas Mantzios
Post by Condor
Regards,
Hristo S.
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
--
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com
Bruce Momjian
2016-07-27 15:45:01 UTC
Permalink
That being said, it doesn't really provide a back-out plan.  The beauty of
replication is that you can halt the upgrade at any point if need be and cut
your (hopefully small) losses. If you use -k, you are all in.  Sure, you could
setup a new standby, stop traffic, upgrade whichever node you'd like (using -k)
and still have the other ready in the event of total catastrophe.  More often
than not, I see DBAs and sysads lead the conversation with "well, postgres
can't replicate from one version to another, so instead.... " followed by a
fast-glazing of management's eyes and a desire to buy a 'commercial database'. 
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to it. I don't see how to improve that either.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Geoff Winkless
2016-07-27 15:51:42 UTC
Permalink
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."
ISTM that they needed a tire swing
<http://i0.wp.com/blogs.perficient.com/perficientdigital/files/2011/07/treecomicbig.jpg>
and were using a dump truck. Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring the
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.

Admittedly they weren't data-corruption bugs but they were of the "this
will cause your application to unnecessarily come to a complete halt for a
couple of hours once your dataset grows beyond a certain point" variety.

As others have pointed out, db changes very rarely happen because of
technical reasons. Most developers will harp on at their boss about how
terrible their current database is and how <preferred database> performs
much better. Eventually one of two things happens: either a) those
developers end up somewhere where their direct boss is in a position to
make the change and he or she doesn't understand how much time and money it
will actually take to change; or b) commercial considerations dictate the
change.

The amount of money and time they've wasted making this change (and
whatever anyone tells you these things never come for free) would have been
better invested in employing one of the commercial PG companies to improve
the specific postgres problems they found. The fact that they decided to go
the MySQL route suggests to me that this was a political, not technical,
change, and they're now reverse-justifying.

For what it's worth, from what I've read uber are a company whose very
business plan relies on them taking things that they don't deserve while
they treat customers and employees with similar levels of arrogance.
Frankly I'd rather there were as many levels of separation as possible
between me and them: they and Oracle are welcome to each other, it seems
like a marriage made in heaven.

Geoff
Andrew Sullivan
2016-07-27 16:11:44 UTC
Permalink
Post by Geoff Winkless
technical reasons. Most developers will harp on at their boss about how
terrible their current database is and how <preferred database> performs
much better. Eventually one of two things happens: either a) those
developers end up in a position where their direct boss is in a position to
make the change and he or she doesn't understand how much time and money it
will actually take to change; or b) commercial considerations dictate the
change.
In a different context, someone suggested to me that Postgres
advocates sounded to him too often like FreeBSD advocates complaining
about Linux, and I'm afraid there is a certain truth to that. Given
the discussion in the post in question, the decision to use MySQL
appears to have been well-justified:

1. They'd decided to use a NoSQL database and ditch relational
systems, because shards.

2. They wanted an MVCC engine behind the above.

3. They wanted SQL semantics to this MVCC-enabled filesystem layer.

Sounds just like MySQL+InnoDB to me. Once you've already decided on
(1), the rest of it flows pretty naturally and Postgres is probably
not your choice. You can dismiss any of 1-3 as commercial or
political advocacy, but while I happen to think they're a somewhat
questionable set of goals, they're not obviously stupid, and
competent people of good will could disagree about them.

At the same time, there really are two serious problems with Postgres
under heavy write loads. Postgres's focus on readers' speed and
convenience means you have to take the hit somewhere, so writers take
it instead. (The other side of the disk-layout description in the
blog post is that, under MySQL, secondary index use is more expensive
for readers than it is in Postgres. The post acknowledges that, but
of course most important secondary indexing is useless under sharding
anyway, since you have to select from shards; so they won't care.)
I/O storms on Postgres are a major source of pain for large operators,
and the tools for understanding are sort of primitive because many of
them depend on underlying OS features and tools.
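That read/write trade-off can be sketched as a toy accounting exercise. This is an illustrative model only, not either engine's real behavior (it ignores heap fetches, caching, and much else); the function names are made up. The key difference: Postgres-style indexes point at a row's physical location, while InnoDB-style secondary indexes point at the primary key.

```python
# Toy accounting of the index-maintenance trade-off described above.
# A sketch, not either engine's real behavior.

def writes_per_update(n_secondary, engine, hot=False):
    """Index entries written when one row is updated in place of its old version."""
    if engine == "postgres":
        # The new tuple version lives at a new physical location, so every
        # index needs a new entry -- unless the HOT optimization applies
        # (same page, no indexed column changed), in which case none do.
        return 0 if hot else 1 + n_secondary        # PK + all secondary
    if engine == "innodb":
        return 0        # row is keyed by PK; unchanged secondary keys stay put
    raise ValueError(engine)

def reads_per_secondary_lookup(engine):
    """Index traversals needed to reach a row via a secondary index."""
    # InnoDB pays at read time instead: secondary index -> PK index.
    return 1 if engine == "postgres" else 2

print("writes per update, 5 secondary indexes:",
      writes_per_update(5, "postgres"), "vs", writes_per_update(5, "innodb"))
print("index traversals per secondary-index read:",
      reads_per_secondary_lookup("postgres"), "vs", reads_per_secondary_lookup("innodb"))
```

Which side of this trade you care about depends on the workload; as noted above, a sharded primary-key-lookup workload barely exercises the secondary-index read penalty at all.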

The second is the upgrade-by-replica-and-fallback-plan problem. It's
really an issue. There is a reason that, back in the cloudy past, we
designed Slony to be able to replicate to and from any supported
version of Postgres: Afilias needed to be able to upgrade without a
lot of down time and with the ability to roll back if we had to,
because that was our contractual obligation. This has always been a
large gap, and when it was raised in the past the answer was, "Well,
Slony can already do that so use it." It wasn't too satisfying then,
and it's not much more satisfying now. :)
Post by Geoff Winkless
better invested in employing one of the commercial PG companies to improve
the specific postgres problems they found.
I think the two big problems laid out above are deep architectural
ones. I'm not sure these are the sort of improvement you can buy
without getting the community on board.
Post by Geoff Winkless
For what it's worth, from what I've read uber are a company whose very
business plan relies on them taking things that they don't deserve while
they treat customers and employees with similar levels of arrogance.
Nothin' for nothin', but I don't think it helps Postgres to attack
others' business plans -- whatever one thinks of them -- as part of an
argument about why Postgres is the right tool for a given job.

Best regards,

A
--
Andrew Sullivan
***@crankycanuck.ca
Rakesh Kumar
2016-07-27 16:33:27 UTC
Permalink
Post by Bruce Momjian
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
Doesn't pg_logical solve this by logically replicating and allowing for
different architectures/versions between the replication nodes?
Geoff Winkless
2016-07-27 16:33:45 UTC
Permalink
Post by Andrew Sullivan
Given
the discussion in the post in question, the decision to use MySQL
Well yes, but that's pretty much the point of back-justification, isn't it?

​[snip a whole bunch of good points]
Post by Andrew Sullivan
Post by Geoff Winkless
For what it's worth, from what I've read uber are a company whose very
business plan relies on them taking things that they don't deserve while
they treat customers and employees with similar levels of arrogance.
Nothin' for nothin', but I don't think it helps Postgres to attack
others' business plans -- whatever one thinks of them -- as part of an
argument about why Postgres is the right tool for a given job.
Oh, I wasn't using it as an argument about anything (hence "for what it's
worth").

G
Bruce Momjian
2016-07-27 16:47:24 UTC
Permalink
Post by Rakesh Kumar
Post by Bruce Momjian
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
doesn't and pg_logical solve this by logically replicating and allowing for
different architecture/version between the replication nodes ?
Yes. I was saying I don't know how to improve pg_upgrade to address it.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Rakesh Kumar
2016-07-27 16:51:40 UTC
Permalink
Post by Bruce Momjian
Yes. I was saying I don't know how to improve pg_upgrade to address it.
This problem exists even in Oracle/DB2/SQL Server. None of them allow
rollback to the lower version unless it is a minor version upgrade. A major
version upgrade almost certainly involves a change in the transaction log
(WAL) structure, and hence no rollback.
Bruce Momjian
2016-07-27 16:54:30 UTC
Permalink
Post by Rakesh Kumar
Post by Bruce Momjian
Yes. I was saying I don't know how to improve pg_upgrade to address it.
This problem is there even in oracle/db2/sqlserver. None of them allow
rollback to the lower version
unless it is a minor version upgrade. Major version upgrade almost
definitely involves change in transaction log
(WAL) structure and hence no rollback.
Oh, good to know.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Chris Travers
2016-07-27 16:54:44 UTC
Permalink
Post by Scott Mead
On Wed, Jul 27, 2016 at 3:34 AM, Achilleas Mantzios <
Post by Achilleas Mantzios
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Honestly, I've never heard of anyone doing that. But it sounds like
Post by Guyren Howe
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right for upgrades.
It's a hard to shutdown 1 TB database and wait couple of days pg_upgrade
to finish upgrade and meanwhile database is offline.
In some distros after upgrade of PG version you don't have old binary
and library, need to do full dump and restore that take time and disk space.
Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
seconds. (with the -k option)
However, be warned that the planing and testing took one full week.
That being said, it doesn't really provide a back-out plan. The beauty of
replication is that you can halt the upgrade at any point if need be and
cut your (hopefully small) losses.
Replication, though, does have limits, and one aspect of incremental
backups is that you cannot restore them from one major version to the next.
Another issue I think they obliquely referred to (in the subtle-problems
section) was the fact that if you have longer-running queries on the replica
along with a lot of updates, you can get odd autovacuum-induced errors
(cleanup by autovacuum on the master can interrupt queries on the slave).
BTW, if there is interest in what could be done about that, something which
allows autovacuum to decide how long to wait before cleaning up dead tuples
would be a great enhancement.

I was on a project once where I was told, "we use pg_dump for our upgrades"
for a multi-TB database. When asked why, the answer made a lot of sense.
Namely if something goes wrong you need to do a restore on the new version
from a logical backup anyway, so you have to take a pg_dump backup before
you start, and you might have to restore anyway. So the thinking was that
it was better to keep expectations low than promise low downtime and have a
two-week outage.
Post by Scott Mead
If you use -k, you are all in. Sure, you could setup a new standby, stop
traffic, upgrade whichever node you'd like (using -k) and still have the
other ready in the event of total catastrophe. More often than not, I see
DBAs and sysads lead the conversation with "well, postgres can't replicate
from one version to another, so instead.... " followed by a fast-glazing of
management's eyes and a desire to buy a 'commercial database'.
This is one area where we need better presentation of what we have and what
it does.

Streaming replication works great for certain things, such as where you
have lots of small queries against the replica, where they don't have to be
absolutely up to date, or where what you are really after is guarantees
that you can keep moving after one of your servers suffers a catastrophic
failure.

Where what you want is a guarantee that the two systems are identical at
the filesystem level, it is great. Where that is not what you want, it is a
pretty bad solution. But then there are Slony, Bucardo, and other logical
replication solutions out there (plus the newer logical replication
approaches in PostgreSQL) which handle the other situations very well (with
a very different sort of added complexity).
Post by Scott Mead
All in all, Evan's blog seemed to start out decently technical, it quickly
took a turn with half-truths, outdated information and, in some cases,
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that this class of bug can happen at all. A new version of Postgres could
be released at any time that has a bug of this nature, and because of the
way replication works, this issue has the potential to spread into all of
the databases in a replication hierarchy."
ISTM that they needed a tire swing
<http://i0.wp.com/blogs.perficient.com/perficientdigital/files/2011/07/treecomicbig.jpg>
and were using a dump truck. Hopefully they vectored somewhere in the
middle and got themselves a nice sandbox.
My first thought was, "If they know the database that well, surely they
could have built something that would work well!"

However, for what they seem to want to do specifically, MySQL might not
actually be a bad choice. In a case like what they are doing, nearly all
of your lookups are probably simple, primary key lookups and there InnoDB's
design helps more than it hurts. If I were to think of one area where MySQL
probably would do better, it would be looking up documents based on simple
primary key searches (no joins, no relational math, no need for complex
plans, just a single primary key index lookup). But this is also a reason
we might not want to worry about this sort of thing too much. Of course
NFS might be another alternative at that level of complexity....

So yeah, a sandbox ;-)
Post by Scott Mead
--Scott
Post by Achilleas Mantzios
Post by Condor
Regards,
Hristo S.
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
--
--
Scott Mead
Sr. Architect
*OpenSCG <http://openscg.com>*
http://openscg.com
--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more
Bruce Momjian
2016-07-27 16:59:59 UTC
Permalink
Post by Bruce Momjian
Post by Rakesh Kumar
Post by Bruce Momjian
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
doesn't pg_logical solve this by replicating logically and allowing for
different architectures/versions between the replication nodes?
Yes. I was saying I don't know how to improve pg_upgrade to address it.
I think long-term we are looking at pg_logical for zero-downtime
upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
want to make a second copy of my data) upgrades (but not downgrades).

I think this is probably the best we are going to be able to do for a
long time.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Bruce Momjian
2016-07-27 17:02:40 UTC
Permalink
Post by Chris Travers
The replication section made me wonder though if they were using the right
replication solution for the job.  If you don't want an on-disk copy, don't use
physical replication.  This being said there is one serious issue here that is
worth mentioning: since autovacuum on the master has no knowledge of the
queries running on the slave, it is easy to have longer-running queries on a
slave whose visible rows get removed by autovacuum on the master and then by
replication.
Uh, see hot_standby_feedback:

#hot_standby_feedback = off # send info from standby to prevent
# query conflicts
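For context, that setting lives in the standby's postgresql.conf (the parameter is real; the comment summarizes its documented effect):

```
# postgresql.conf on the standby
hot_standby_feedback = on   # the standby reports its oldest running query's
                            # xmin to the master, so vacuum on the master
                            # holds off removing rows that query still needs;
                            # a reload (not a restart) applies the change
```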
Post by Chris Travers
All of the above being said, there are solutions to all the major problems. 
But you have to know about them, where to look, and what to do.  And with
higher scale, one very important aspect is that attention to detail starts to
matter a whole lot.  I agree that there are some good points raised but I
wonder what the solutions are.  There is room for some improvement in the
backend (it would really be nice to instrument and measure toasting/detoasting
overhead in explain analyze) but for a lot of these I wonder if that is
secondary. PostgreSQL is very well optimized for a certain series of tasks,
and one can build well-optimized solutions well outside that. At a certain
point (including a certain scale) there will be no substitute for a team of
people who really know the db backend inside and out who can design around
limitations and I think that is true for all databases I have worked with.
Watching the video was helpful:

https://vimeo.com/145842299

You can see the failover happened because of various user errors. That
doesn't excuse our bug, but I am not sure exactly how much they
understood of Postgres behavior. His talk near the end about the
replication infrastructure being exposed to them was also interesting.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Bruce Momjian
2016-07-27 17:04:10 UTC
Permalink
Post by Bruce Momjian
Post by Bruce Momjian
Post by Rakesh Kumar
Post by Bruce Momjian
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
doesn't and pg_logical solve this by logically replicating and allowing for
different architecture/version between the replication nodes ?
Yes. I was saying I don't know how to improve pg_upgrade to address it.
I think long-term we are looking at pg_logical for zero-downtime
upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
want to make a second copy of my data) upgrades (but not downgrades).
I think this is probably the best we are going to be able to do for a
long time.
Oh, let me give credit to Simon, who has always seen pg_logical as
providing superior upgrade options where the logical replication setup
isn't a problem.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Vik Fearing
2016-07-27 17:08:42 UTC
Permalink
Another one I think they obliquely referred to (in the subtle problems
section) was the fact that if you have longer-running queries on the
replica with a lot of updates, you can get funny auto-vacuum-induced
errors (writes from autovacuum on the master can interrupt queries on
the slave). BTW if there is interest in what could be done for that,
something which allows autovacuum to decide how long to wait before
cleaning up dead tuples would be a great enhancement.
You mean something like hot_standby_feedback?

https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
--
Vik Fearing +33 6 46 75 15 36
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
Marc Fournier
2016-07-27 17:54:25 UTC
Permalink
Post by Bruce Momjian
I think long-term we are looking at pg_logical for zero-downtime
upgrades and _downgrades_, and pg_upgrade for less overhead (I don't
want to make a second copy of my data) upgrades (but not downgrades).
I think this is probably the best we are going to be able to do for a
long time.
Stupid question here, but do we provide any less than what MySQL does? I’m reading:

http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-paths <http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-paths>

==
Unless otherwise documented, the following downgrade paths are supported:

Downgrading from a release series version to an older release series version is supported using all downgrade methods <http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-methods>. For example, downgrading from 5.7.10 to 5.7.9 is supported. Skipping release series versions is also supported. For example, downgrading from 5.7.11 to 5.7.9 is supported.

Downgrading one release level is supported using the logical downgrade method. For example, downgrading from 5.7 to 5.6 is supported.

Downgrading more than one release level is supported using the logical downgrade method, but only if you downgrade one release level at a time. For example, you can downgrade from 5.7 to 5.6, and then to 5.5.

==

So, downgrade minor releases can be done by just changing the binaries …
downgrading an older ‘major release’ requires a dump/reload …


Unless I’m missing something, whether on PostgreSQL or MySQL, if you want to go back a major release, you would need to dump / reload that 1TB database …



--
Marc G Fournier http://www.2ndQuadrant.com <http://www.2ndquadrant.com/>
PostgreSQL Development, 24x7 Support, RemoteDBA, Training & Services
Bruce Momjian
2016-07-27 17:57:39 UTC
Permalink
Post by Marc Fournier
http://dev.mysql.com/doc/refman/5.7/en/downgrading.html#downgrade-paths
==
Unless otherwise documented, the following downgrade paths are
• Downgrading from a release series version to an older release series
version is supported using all downgrade methods. For example, downgrading
from 5.7.10 to 5.7.9 is supported. Skipping release series versions is also
supported. For example, downgrading from 5.7.11 to 5.7.9 is supported.
• Downgrading one release level is supported using the logical downgrade
method. For example, downgrading from 5.7 to 5.6 is supported.
• Downgrading more than one release level is supported using the logical
downgrade method, but only if you downgrade one release level at a time.
For example, you can downgrade from 5.7 to 5.6, and then to 5.5.
==
So, downgrade minor releases can be done by just changing the binaries …
downgrading an older ‘major release’ requires a dump/reload …
Unless I’m missing something, whether on PostgreSQL or MySQL, if you want to go
back a major release, you would need to dump / reload that 1TB database …
What they wanted, and I think it was mentioned in the document, was to
upgrade the slaves independently, then the master. I think MySQL supports
that; Postgres doesn't.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Rakesh Kumar
2016-07-27 17:58:25 UTC
Permalink
On Wed, Jul 27, 2016 at 1:54 PM, Marc Fournier
Stupid question here, but do we provide any less then what MySQL does? I’m
MySQL provides the same rollback functionality that Oracle/DB2 provide.
That is, rollback on a minor version upgrade is possible, but not on a
major version upgrade.

I am surprised PG does not even allow minor version rollback.
Andrew Sullivan
2016-07-27 18:07:11 UTC
Permalink
Post by Rakesh Kumar
I am surprised PG does not even allow minor version rollback.
It almost never happens that a minor version (N.M.x, x is minor)
requires an upgrade at all. Change your binaries and you're done.
Catalogue incompatibility historically was the basis for something
becoming a major version upgrade. (I can recall a couple bugs where
you had to tickle the catalogues, so it's not exactly true that
they're never incompatible, but it's incredibly rare.)

Best regards,

A
--
Andrew Sullivan
***@crankycanuck.ca
Bruce Momjian
2016-07-27 18:07:32 UTC
Permalink
Post by Rakesh Kumar
On Wed, Jul 27, 2016 at 1:54 PM, Marc Fournier
Stupid question here, but do we provide any less then what MySQL does? I’m
mysql provides same functionality for rollback like oracle/db2
provides. That is,
rollback on a minor version upgrade possible, but not on major version upgrade.
I am surprised PG does not even allow minor version rollback.
Uh, I thought we did. Perhaps there are a few that didn't.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Rakesh Kumar
2016-07-27 18:10:04 UTC
Permalink
Post by Andrew Sullivan
It almost never happens that a minor version (N.M.x, x is minor)
requires an upgrade at all. Change your binaries and you're done.
Catalogue incompatibility historically was the basis for something
becoming a major version upgrade. (I can recall a couple bugs where
you had to tickle the catalogues, so it's not exactly true that
they're never incompatible, but it's incredibly rare.)
Yeah good enough.
Kevin Grittner
2016-07-27 19:25:41 UTC
Permalink
Post by Condor
They are right for upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for
pg_upgrade to finish the upgrade while the database is offline.
What? I had a cluster over 3TB and it was offline for only 10
minutes doing a major version upgrade using pg_upgrade's --link
option. Of course, that was with minimal statistics -- just enough
to keep from getting truly stupid plans. If it takes *days* to run
pg_upgrade on a 1TB cluster either you have an insane number of
database objects or you are not taking advantage of the available
features.
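The fast path Kevin describes looks roughly like this (paths and versions are illustrative assumptions; see the pg_upgrade docs for the real procedure):

```shell
# Hard-link upgrade: downtime in minutes, but once the new cluster has
# started and taken writes, there is no going back to the old one.
pg_upgrade --link \
  -b /usr/pgsql-9.0/bin      -B /usr/pgsql-9.3/bin \
  -d /var/lib/pgsql/9.0/data -D /var/lib/pgsql/9.3/data

# pg_upgrade leaves behind a script that rebuilds optimizer statistics in
# stages, so you can open for traffic with minimal (not "truly stupid")
# plans and let the full analyze finish afterwards:
./analyze_new_cluster.sh
```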

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Kevin Grittner
2016-07-27 19:33:54 UTC
Permalink
Post by Scott Mead
Post by Achilleas Mantzios
Our last 1TB upgrade from 9.0 -> 9.3 went like a charm in something like
seconds. (with the -k option)
However, be warned that the planing and testing took one full week.
Agreed -- you should spend a lot more time on planning and testing
than the actual upgrade will take; but that will probably be true
with any product.
Post by Scott Mead
That being said, it doesn't really provide a back-out plan.
Until you get to the end of the upgrade and *start the cluster
under the new version* you can fall back to the old version. I
remember a couple times that we saw something during a pg_upgrade
--link run that we weren't expecting, and did exactly that so we
could investigate and try again later.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Andrew Sullivan
2016-07-27 20:06:51 UTC
Permalink
Post by Kevin Grittner
Until you get to the end of the upgrade and *start the cluster
under the new version* you can fall back to the old version.
Yeah, but to be fair a lot of well-funded businesses (note what
started this discussion) are pico-managed by people way up the stack
who want a bite-sized answer. If you do not have a happy story for,
"What if we're 48 hours into the upgrade and discover some critical
bug corner case that makes us need to roll back?" then you're going to
lose them. Never mind that such cases literally never happen (if you
have a 48 hour old bug in an Internet system today, you have an
emergency bugfix, not a rollback).

A great deal of practical delivery of technology involves managing
expectations of management who do not understand what they are asking
for and basically want a glib happy answer. As people delivering such
things, we must find a glib happy answer that does not get us fired if
it turns out to be false. The poor story Postgres has about
downgrade, _even if it's a stupid problem_, is a problem. It might
not be worth fixing because it's a stupid problem. But one has to
face the critique in its own terms.

A
--
Andrew Sullivan
***@crankycanuck.ca
Patrick B
2016-07-27 21:34:05 UTC
Permalink
We can't use pg_upgrade on our 3TB database... it just does not work.
That's the main reason we're still using 9.2.
Jason Dusek
2016-07-28 05:52:41 UTC
Permalink
With regard to write amplification, it makes me think about OIDs. It used
to be that every row had an OID, and that OID persisted across row versions.


https://www.postgresql.org/docs/9.5/static/runtime-config-compatible.html#GUC-DEFAULT-WITH-OIDS

Would reintroducing such a feature address some of Uber's concerns about
multiple indexes? It could, and would do so without the implicit
requirement of a foreign key; but it would also require a fast OID to CTID
mapping.
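For readers who never used them, the old behavior looked like this (a sketch with a made-up table; WITH OIDS is the legacy syntax governed by default_with_oids):

```shell
# The oid is a hidden system column: it survives UPDATEs of the row
# (it is copied to each new row version) but is not shown by SELECT *.
psql -c "CREATE TABLE docs (body text) WITH OIDS;"
psql -c "INSERT INTO docs VALUES ('hello') RETURNING oid;"
psql -c "SELECT oid, body FROM docs;"
```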
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like they
had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
Scott Marlowe
2016-07-28 13:38:52 UTC
Permalink
Post by Geoff Winkless
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome that
this class of bug can happen at all. A new version of Postgres could be
released at any time that has a bug of this nature, and because of the way
replication works, this issue has the potential to spread into all of the
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump truck. Hopefully
they vectored somewhere in the middle and got themselves a nice sandbox.
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring the
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a shortcut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again and, again,
took about a year to fix.

The fact that it was introduced into a General Release midstream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.

As someone who has gotten more than one bug fix from pgsql in less
than 48 hours, I feel sorry for anyone who finds a bug in a MySQL
version they are running in production.
James Keener
2016-07-28 13:45:19 UTC
Permalink
If we're talking about favourite bug https://bugs.mysql.com/bug.php?id=21153
is mine

Join with many tables hangs mysql (and taking 100% cpu)
the following query hangs the mysql server taking 100% cpu. also an
"explain" of the query hangs the server!
It's "not a bug" because you can change some of the default query planning
parameters to avoid it:

Igor Babaev
This is not a bug.
The reported query is a 18-way join. For such queries we expect that the
full search for the best execution plan will take a significant amount of
time.
At the same time, due to the specific structure of the reported query, we
can hope to get a good execution plan with a limited search (see Manual
5.0: 7.5.3. Controlling Query Optimizer Performance).
Setting the value of the global variable 'optimizer_search_depth' to 4 or
even to 2 we can get the same execution plan as with a full search. Yet it
To me that speaks volumes. Sure, you can tweak db params to get better
performance, but I shouldn't have to deviate from the default for it to
simply work at all!
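For reference, the workaround Igor Babaev pointed at is a session variable (the variable name is real; the values are the ones suggested in the bug report):

```shell
# Cap the exhaustive join-order search: with an 18-way join a full search
# considers join orders factorially, so bounding the depth trades plan
# optimality for a planner that finishes at all.
mysql -e "SET SESSION optimizer_search_depth = 4;"
# 0 asks the server to pick a reasonable depth automatically.
mysql -e "SET SESSION optimizer_search_depth = 0;"
```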

Jim
Post by Geoff Winkless
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that
Post by Geoff Winkless
Post by Scott Mead
this class of bug can happen at all. A new version of Postgres could be
released at any time that has a bug of this nature, and because of the
way
Post by Geoff Winkless
Post by Scott Mead
replication works, this issue has the potential to spread into all of
the
Post by Geoff Winkless
Post by Scott Mead
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump truck.
Hopefully
Post by Geoff Winkless
Post by Scott Mead
they vectored somewhere in the middle and got themselves a nice sandbox.
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted
in
Post by Geoff Winkless
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring
the
Post by Geoff Winkless
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a short cut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversite / code review practices,
then fixed about 9 months later, then introduced again, and again,
took about a year to fix.
The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.
As someone who has gotten more than one bug fix from pgsql in less
than 48 hours, I feel sorry for anyone who finds a bug in a MySQL
version they are running in production.
Alex Ignatov
2016-07-28 14:38:49 UTC
Permalink
Post by Rakesh Kumar
Post by Bruce Momjian
Yes. I was saying I don't know how to improve pg_upgrade to address it.
This problem exists even in Oracle/DB2/SQL Server. None of them allows
rollback to the lower version unless it is a minor version upgrade. A
major version upgrade almost always involves a change in transaction log
(WAL) structure, and hence no rollback.
Sorry, what? You can roll back to the previous version of the software you had.

https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Rakesh Kumar
2016-07-28 14:50:59 UTC
Permalink
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.

Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Alex Ignatov
2016-07-28 14:59:28 UTC
Permalink
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -

https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007

About the compatible parameter you are right, but downgrade is possible from 12c.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Igor Neyman
2016-07-28 15:09:32 UTC
Permalink
-----Original Message-----
From: pgsql-general-***@postgresql.org [mailto:pgsql-general-***@postgresql.org] On Behalf Of Alex Ignatov
Sent: Thursday, July 28, 2016 10:59 AM
To: Rakesh Kumar <***@gmail.com>
Cc: PostgreSQL General <pgsql-***@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -

https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007

About compatible params you are right but downgrade is possible from 12c.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres Company

_____________________________________________________________________________

Alex, the documentation you point to just proves what Rakesh said:

"For supported releases of Oracle Database, you can downgrade a database to the release from which you last upgraded. For example, if you recently upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."

So, you can downgrade only if you didn't "activate" newer features (didn't set compatibility to the higher version).
But then, what's the point in upgrading (I know, there are some reasons, like bug fixes, etc...), if you aren't going to use newer features?

Regards,
Igor
Alex Ignatov
2016-07-28 15:26:08 UTC
Permalink
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 10:59 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -
https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007
About compatible params you are right but downgrade is possible from 12c.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres Company
_____________________________________________________________________________
"For supported releases of Oracle Database, you can downgrade a database to the release from which you last upgraded. For example, if you recently upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."
So, you can downgrade only if you didn't "activate" newer feature (didn't compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug fixes, etc...), if you aren't going to use newer features.
Regards,
Igor
Not true again.

From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section
"How the COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE
initialization parameter set to 11.0.0, then it generates database
structures on disk that are compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables
the use of features. If you try to use any new features that make the
database incompatible with the COMPATIBLE initialization parameter, then
an error occurs. However, any new features that do not make incompatible
changes on disk are enabled.

So if a feature does not change the on-disk format in an incompatible way,
that feature is enabled.

Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
:

"This parameter specifies the release with which Oracle must maintain
compatibility. It enables you to take advantage of the maintenance
improvements of a new release immediately in your production systems
without testing the new functionality in your environment. Some features
of the release may be restricted."

So this parameter is designed with rollback in mind, and you still get some
new features from the newer software release.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Igor Neyman
2016-07-28 15:34:19 UTC
Permalink
-----Original Message-----
From: Alex Ignatov [mailto:***@postgrespro.ru]
Sent: Thursday, July 28, 2016 11:26 AM
To: Igor Neyman <***@perceptron.com>; Rakesh Kumar <***@gmail.com>
Cc: PostgreSQL General <pgsql-***@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 10:59 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm
#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -
https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007
About compatible params you are right but downgrade is possible from 12c.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company
______________________________________________________________________
_______
"For supported releases of Oracle Database, you can downgrade a database to the release from which you last upgraded. For example, if you recently upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."
So, you can downgrade only if you didn't "activate" newer features (didn't raise compatibility to the higher version).
But then, what's the point in upgrading (I know, there are some reasons, like bug fixes, etc.) if you aren't going to use the newer features?
Regards,
Igor
Not true again.

From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section "How the COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE initialization parameter set to 11.0.0, then it generates database structures on disk that are compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables the use of features. If you try to use any new features that make the database incompatible with the COMPATIBLE initialization parameter, then an error occurs. However, any new features that do not make incompatible changes on disk are enabled.

So if feature do not change disk in incompatible way this features is enable.

Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
:

"This parameter specifies the release with which Oracle must maintain compatibility. It enables you to take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality in your environment. Some features of the release may be restricted."

So this parameter is made with rollback in mind and you have some new feature from newer software release.

Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company

___________________________________________________________________________________________________________________

^^^
Which means that you can make use of some new features, but definitely not all.
That makes the "downgrade" feature very, very limited, if useful at all.

Regards,
Igor
Geoff Winkless
2016-07-28 15:37:12 UTC
Permalink
Post by Igor Neyman
Which means that you can make use of some new feature, but definitely not
all.
That makes "downgrade" feature very, very limited, if useful at all.
​
Sufficient to allow you to run the upgrade, find that there's a
catastrophic bug in the new version that causes your live system to fall
over but which didn't appear on your test system, and then revert to a
working version?

I'd say that's a fairly useful feature, limited or not.

Geoff
Igor Neyman
2016-07-28 15:41:41 UTC
Permalink
-----Original Message-----
From: Alex Ignatov [mailto:***@postgrespro.ru]
Sent: Thursday, July 28, 2016 11:26 AM
To: Igor Neyman <***@perceptron.com>; Rakesh Kumar <***@gmail.com>
Cc: PostgreSQL General <pgsql-***@postgresql.org>
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 10:59 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm
#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -
https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007
About compatible params you are right but downgrade is possible from 12c.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company
______________________________________________________________________
_______
"For supported releases of Oracle Database, you can downgrade a database to the release from which you last upgraded. For example, if you recently upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."
So, you can downgrade only if you didn't "activate" newer feature (didn't compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug fixes, etc...), if you aren't going to use newer features.
Regards,
Igor
Not true again.

From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section "How the COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE initialization parameter set to 11.0.0, then it generates database structures on disk that are compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables the use of features. If you try to use any new features that make the database incompatible with the COMPATIBLE initialization parameter, then an error occurs. However, any new features that do not make incompatible changes on disk are enabled.

So if feature do not change disk in incompatible way this features is enable.

Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
:

"This parameter specifies the release with which Oracle must maintain compatibility. It enables you to take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality in your environment. Some features of the release may be restricted."

So this parameter is made with rollback in mind and you have some new feature from newer software release.

Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company

____________________________________________________________________________________________________________________

B.t.w., in Postgres a release which doesn't change "database structures" is called "minor" (upgrading is just a matter of replacing binaries), and it allows downgrade.

"Compatible" or not, in the "Oracle world" no DBA in their right mind will attempt a major upgrade without having a complete (and tested) backup that could be used in case the upgrade goes wrong.

Regards,
Igor
Chris Travers
2016-07-28 16:07:27 UTC
Permalink
Post by Scott Mead
Post by Geoff Winkless
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome
that
Post by Geoff Winkless
Post by Scott Mead
this class of bug can happen at all. A new version of Postgres could be
released at any time that has a bug of this nature, and because of the
way
Post by Geoff Winkless
Post by Scott Mead
replication works, this issue has the potential to spread into all of
the
Post by Geoff Winkless
Post by Scott Mead
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump truck.
Hopefully
Post by Geoff Winkless
Post by Scott Mead
they vectored somewhere in the middle and got themselves a nice sandbox.
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted
in
Post by Geoff Winkless
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring
the
Post by Geoff Winkless
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a short cut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again,
took about a year to fix.
The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.
As for MySQL issues, personally I love the fact that a single query
inserting a bunch of rows can sometimes deadlock against itself. And I
love the fact that this is obliquely documented as expected behavior. May
I mention I am *really glad* PostgreSQL doesn't go the whole multi-threaded
backend route and that this is exhibit A as to why (I am sure it is a
thread race issue between index and table updates)?
Post by Scott Mead
As someone who has gotten more than one bug fix from pgsql in less
than 48 hours, I feel sorry for anyone who finds a bug in a MySQL
version they are running in production.
--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more
Alex Ignatov
2016-07-28 16:32:39 UTC
Permalink
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 11:26 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 10:59 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Rakesh Kumar
Post by Alex Ignatov
Sorry, what? You can rollback to previous version of software what you had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm
#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -
https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007
About compatible params you are right but downgrade is possible from 12c.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company
______________________________________________________________________
_______
"For supported releases of Oracle Database, you can downgrade a database to the release from which you last upgraded. For example, if you recently upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not change the compatible initialization parameter to 12.1 or higher, then you can downgrade to release 11.2.0.4."
So, you can downgrade only if you didn't "activate" newer feature (didn't compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug fixes, etc...), if you aren't going to use newer features.
Regards,
Igor
Not true again.
From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section "How the COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE initialization parameter set to 11.0.0, then it generates database structures on disk that are compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables the use of features. If you try to use any new features that make the database incompatible with the COMPATIBLE initialization parameter, then an error occurs. However, any new features that do not make incompatible changes on disk are enabled.
So if feature do not change disk in incompatible way this features is enable.
Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
"This parameter specifies the release with which Oracle must maintain compatibility. It enables you to take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality in your environment. Some features of the release may be restricted."
So this parameter is made with rollback in mind and you have some new feature from newer software release.
Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company
____________________________________________________________________________________________________________________
B.t.w., In Postgres release which doesn't change "database structures" is called "minor" (just replacing binaries), and allows downgrade.
"Compactable" or not, in "Oracle world" no DBA in their right mind will attempt major upgrade without having complete (and tested) backup that could be used in case of upgrade going wrong.
Regards,
Igor
Oh, so in contrast to the "Oracle world", a "Postgres world" DBA is
within their rights to do a major upgrade without a complete and tested
backup? Ok, I understand you. In the Postgres world the sky is always
blue and the sun is always shining.


Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Chris Travers
2016-07-28 16:36:19 UTC
Permalink
Post by Alex Ignatov
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 11:26 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
Post by Igor Neyman
-----Original Message-----
Sent: Thursday, July 28, 2016 10:59 AM
Subject: Re: [GENERAL] Uber migrated from Postgres to MySQL
On Thu, Jul 28, 2016 at 10:38 AM, Alex Ignatov <
Sorry, what? You can rollback to previous version of software what you
Post by Alex Ignatov
had.
https://docs.oracle.com/cd/E11882_01/server.112/e23633/downgrade.htm
#
UPGRD007
Not so fast. This requires the db to be set in compatibility mode of
the lower version. Once that is removed, no downgrade is possible.
Also starting from Oracle 12 this downgrade option has been removed.
That is, you can't upgrade in-place from Or 11 to 12 and then
downgrade back. The only supported way is the restore from an old
backup (which is true for all products). I don't work in Oracle, but
got this clarified by someone who is an Oracle dba.
Not true -
https://docs.oracle.com/database/121/UPGRD/downgrade.htm#UPGRD007
About compatible params you are right but downgrade is possible from 12c.
Alex Ignatov
Postgres Professional: http://www.postgrespro.com The Russian Postgres
Company
______________________________________________________________________
_______
"For supported releases of Oracle Database, you can downgrade a database
to the release from which you last upgraded. For example, if you recently
upgraded from release 11.2.0.4 to Oracle Database 12c, and you did not
change the compatible initialization parameter to 12.1 or higher, then you
can downgrade to release 11.2.0.4."
So, you can downgrade only if you didn't "activate" newer feature
(didn't compatibility to higher version).
But then, what's the point in upgrading (I know, there are some like bug
fixes, etc...), if you aren't going to use newer features.
Regards,
Igor
Not true again.
From Oracle docs
https://docs.oracle.com/database/121/UPGRD/intro.htm#UPGRD001 Section
"How the COMPATIBLE Initialization Parameter Operates in Oracle Database"
-
If you run an Oracle Database 12c database with the COMPATIBLE
initialization parameter set to 11.0.0, then it generates database
structures on disk that are compatible with Oracle Database 11g.
Therefore, the COMPATIBLE initialization parameter enables or disables
the use of features. If you try to use any new features that make the
database incompatible with the COMPATIBLE initialization parameter, then an
error occurs. However, any new features that do not make incompatible
changes on disk are enabled.
So if feature do not change disk in incompatible way this features is
enable.
Also from this
https://docs.oracle.com/database/121/REFRN/GUID-6C57EE11-BD06-4BB8-A0F7-D6CDDD086FA9.htm#REFRN10019
"This parameter specifies the release with which Oracle must maintain
compatibility. It enables you to take advantage of the maintenance
improvements of a new release immediately in your production systems
without testing the new functionality in your environment. Some features of
the release may be restricted."
So this parameter is made with rollback in mind and you have some new
feature from newer software release.
Alex Ignatov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company
____________________________________________________________________________________________________________________
B.t.w., In Postgres release which doesn't change "database structures"
is called "minor" (just replacing binaries), and allows downgrade.
"Compactable" or not, in "Oracle world" no DBA in their right mind will
attempt major upgrade without having complete (and tested) backup that
could be used in case of upgrade going wrong.
Regards,
Igor
Oh, so in contrast to "Oracle world" "Postgres world" DBA in their
right to do major upgrade without complete and tested backup?
Ok, I understand you. In Postgres world there always sky is blue and sun
is shining.
And see, I am just wondering, what DBA in their right mind would allow a db
to run without a complete and tested backup. I mean when you upgrade the
last thing you do is take another backup and the first thing you do after
upgrading is take another backup, right? Or am I just being dense?
Scott Marlowe
2016-07-28 16:43:02 UTC
Permalink
Oh, so in contrast to "Oracle world" "Postgres world" DBA in their right
to do major upgrade without complete and tested backup?
Ok, I understand you. In Postgres world there always sky is blue and sun is
shining.
Of course we have backups. But we also have slony. So we CAN go back
and forth between latest and previous without a restore.
Alex Ignatov
2016-07-28 17:23:31 UTC
Permalink
Post by Scott Marlowe
Oh, so in contrast to "Oracle world" "Postgres world" DBA in their right
to do major upgrade without complete and tested backup?
Ok, I understand you. In Postgres world there always sky is blue and sun is
shining.
Of course we have backups. But we also have slony. So we CAN go back
and forth between latest and previous without a restore.
And? Oracle and MySQL don't have it, but they can downgrade right out
of the box. Quick and easy.

Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Rakesh Kumar
2016-07-28 17:35:45 UTC
Permalink
And? Oracle and MySql doesnt have it but can downgrade right out the box.
Quick and easy.
No, it is not, for MySQL.

http://dev.mysql.com/doc/refman/5.7/en/downgrading.html

"In-place Downgrade: Involves shutting down the new MySQL version,
replacing the new MySQL binaries or packages with the old ones, and
restarting the old MySQL version on the existing data directory.
In-place downgrades are supported for downgrades between GA versions
within the same release series. For example, in-place downgrades are
supported for downgrades from 5.7.10 to 5.7.9."

The above is easy, and the same as in PG if you are going from 9.5.2 to 9.5.1.

Now let us talk about a major downgrade.

"Downgrading one release level is supported using the logical downgrade
method. For example, downgrading from 5.7 to 5.6 is supported. Logical
Downgrade: Involves using mysqldump to dump all tables from the new
MySQL version, and then loading the dump file into the old MySQL
version. Logical downgrades are supported for downgrades between GA
versions within the same release series and for downgrades between
release levels. For example, logical downgrades are supported for
downgrades from 5.7.10 to 5.7.9 and for downgrades from 5.7 to 5.6."

Only a fool would consider this simple. And BTW, all products support
methods similar to mysqldump. What we are looking at is an in-place
downgrade after a major version upgrade, and that looks to be almost
non-existent.
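
For comparison, a rough sketch of what that "logical downgrade" entails
in practice (service names, paths, and credentials below are
placeholders, not taken verbatim from the MySQL docs):

```shell
# 1. Take a logical dump from the running 5.7 server
mysqldump -u root -p --all-databases --routines --events > all-databases-57.sql

# 2. Stop 5.7, install the 5.6 binaries, and point them at a *fresh* data
#    directory (the 5.7 datadir cannot simply be reused)
sudo systemctl stop mysql          # service name is a placeholder
# ... swap packages, initialize an empty 5.6 data directory ...
sudo systemctl start mysql

# 3. Reload everything into the old version
mysql -u root -p < all-databases-57.sql
```

For a large database, step 1 and step 3 alone can take many hours, which
is the point being made above.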
Scott Marlowe
2016-07-28 17:43:38 UTC
Permalink
Post by Scott Marlowe
Post by Alex Ignatov
Oh, so in contrast to "Oracle world" "Postgres world" DBA in their
right
to do major upgrade without complete and tested backup?
Ok, I understand you. In Postgres world there always sky is blue and sun
is
shining.
Of course we have backups. But we also have slony. So we CAN go back
and forth between latest and previous without a restore.
And? Oracle and MySql doesnt have it but can downgrade right out the box.
Quick and easy.
So you can swap between oracle 11 and 12 back and forth in a live
environment with no downtime? Please pull the other leg.
Joshua D. Drake
2016-07-28 17:54:01 UTC
Permalink
Post by Scott Marlowe
And? Oracle and MySql doesnt have it but can downgrade right out the box.
Quick and easy.
So you can swap between oracle 11 and 12 back and forth in a live
environment with no downtime? Please pull the other leg.
This conversation seems to be degrading, can we keep it productive please?

JD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.
Edson Richter
2016-07-28 18:09:41 UTC
Permalink
On Thu, Jul 28, 2016 at 3:38 PM, Scott Marlowe
On Wed, Jul 27, 2016 at 9:51 AM, Geoff Winkless
Post by Geoff Winkless
Post by Scott Mead
"The bug we ran into only affected certain releases of
Postgres 9.2 and
Post by Geoff Winkless
Post by Scott Mead
has been fixed for a long time now. However, we still find it
worrisome that
Post by Geoff Winkless
Post by Scott Mead
this class of bug can happen at all. A new version of Postgres
could be
Post by Geoff Winkless
Post by Scott Mead
released at any time that has a bug of this nature, and because
of the way
Post by Geoff Winkless
Post by Scott Mead
replication works, this issue has the potential to spread into
all of the
Post by Geoff Winkless
Post by Scott Mead
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump
truck. Hopefully
Post by Geoff Winkless
Post by Scott Mead
they vectored somewhere in the middle and got themselves a nice
sandbox.
Post by Geoff Winkless
At least his bug got fixed. The last 2 bugs I reported to MySQL
resulted in
Post by Geoff Winkless
an initial refusal to accept any problem existed, followed by
(once that
Post by Geoff Winkless
particular strategy had run out of steam) the developer simply
ignoring the
Post by Geoff Winkless
bug until it was closed automatically by their bug system. As
far as I'm
Post by Geoff Winkless
aware those bugs still exist in the most recent version.
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a short cut that sped up order by by quite a bit. It also meant
that order by desc would actually get order by asc output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again,
took about a year to fix.
The fact that it was introduced into a General Release mid stream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.
As for MySQL issues, personally I love the fact that a single query
inserting a bunch of rows can sometimes deadlock against itself. And
I love the fact that this is obliquely documented as expected
behavior. May I mention I am *really glad* PostgreSQL doesn't go the
whole multi-threaded backend route and that this is exhibit A as to
why (I am sure it is a thread race issue between index and table
updates)?
Sorry, I think this is a biased view. Multi-threading will show as
many problems as multi-process - both have to handle simultaneous access
(or, at least, need a correct semaphore implementation to serialize
writes and synchronize reads).
The fact is, **on this point at least**, that Postgres is correctly
implemented, and MySQL is faulty.
I've faced the "lost FK integrity hell" (caused by the problem above)
with MySQL long before decided to migrate all systems to PostgreSQL.
My personal experience is that MySQL is excellent for data that is not
sensitive (web site, e-mail settings, etc). Everything else goes to
PostgreSQL (or Oracle, or MS SQL Server, or Sybase, or DB2 - in *my*
order of preference).


Regards,

Edson Richter
D'Arcy J.M. Cain
2016-07-28 19:30:39 UTC
Permalink
On Thu, 28 Jul 2016 19:32:39 +0300
Post by Alex Ignatov
Post by Igor Neyman
"Compactable" or not, in "Oracle world" no DBA in their right mind
will attempt major upgrade without having complete (and tested)
backup that could be used in case of upgrade going wrong.
Oh, so in contrast to "Oracle world" "Postgres world" DBA in their
right to do major upgrade without complete and tested backup?
Ok, I understand you. In Postgres world there always sky is blue and
sun is shining.
Yikes! Where did you read that?
--
D'Arcy J.M. Cain <***@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner.
IM: ***@Vex.Net, VoIP: sip:***@druid.net
Bruce Momjian
2016-07-28 22:16:48 UTC
Permalink
That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems. Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects. If you only have thousands of objects, then as
far as I know -k works like a charm.
millions of tables?
Well, it was a problem at much smaller values, until we fixed many of
them. But the perversity is, if you are stuck on a version before the
fixes, the problems prevent you from getting to a version on which it
is not a problem any more.
Uh, that is only true if the slowness was in _dumping_ many objects.
Most of the fixes have been for _restoring_ many objects, and that is
done in the new cluster, so they should be OK.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
John R Pierce
2016-07-28 22:26:17 UTC
Permalink
That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems. Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects. If you only have thousands of objects, then as
far as I know -k works like a charm.
millions of tables?
Well, it was a problem at much smaller values, until we fixed many of
them. But the perversity is, if you are stuck on a version before the
fixes, the problems prevent you from getting to a version on which it
is not a problem any more.
Uh, that is only true if the slowness was in_dumping_ many objects.
Most of the fixes have been for_restoring_ many objects, and that is
done in the new cluster, so they should be OK.
I thought we were talking about pg_upgrade in -k link mode? or does
that rely on a dump/restore --schema-only operation to create the metadata?
--
john r pierce, recycling bits in santa cruz
Bruce Momjian
2016-07-28 22:53:58 UTC
Permalink
Post by Bruce Momjian
Uh, that is only true if the slowness was in _dumping_ many objects.
Most of the fixes have been for _restoring_ many objects, and that is
done in the new cluster, so they should be OK.
I thought we were talking about pg_upgrade in -k link mode?    or does that
rely on a dump/restore --schema-only operation to create the metadata?
Yes, it does, with or without -k --- -k only controls file link vs.
file copy.
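
For reference, the invocation under discussion looks roughly like this
(paths and versions are placeholders); the internal --schema-only
dump/restore of the catalogs happens either way:

```shell
pg_upgrade \
  --old-bindir  /usr/pgsql-9.1/bin \
  --new-bindir  /usr/pgsql-9.5/bin \
  --old-datadir /var/lib/pgsql/9.1/data \
  --new-datadir /var/lib/pgsql/9.5/data \
  --link   # -k: hard-link data files into the new cluster instead of copying
```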
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Joe Conway
2016-07-28 23:58:16 UTC
Permalink
Post by Bruce Momjian
That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems. Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects. If you only have thousands of objects, then as
far as I know -k works like a charm.
millions of tables?
Well, it was a problem at much smaller values, until we fixed many of
them. But the perversity is, if you are stuck on a version before the
fixes, the problems prevent you from getting to a version on which it
is not a problem any more.
Uh, that is only true if the slowness was in _dumping_ many objects.
Most of the fixes have been for _restoring_ many objects, and that is
done in the new cluster, so they should be OK.
Not really true. I ran into two separate cases where on older (pre 9.3 I
believe) Postgres if you had hundreds of thousands of tables (in the
case I remember well, it was about 500k tables) the schema dump from the
old cluster basically never finished (ok, was killed after about a
week). I had to find the patch that fixed a good bit of the slowness and
backport it to the older version so we could successfully run pg_upgrade
(in something like 14 hours instead of 7+ days).

Joe
--
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development
Joshua D. Drake
2016-07-29 00:23:47 UTC
Permalink
Post by Joe Conway
Not really true. I ran into two separate cases where on older (pre 9.3 I
believe) Postgres if you had hundreds of thousands of tables (in the
case I remember well, it was about 500k tables) the schema dump from the
old cluster basically never finished (ok, was killed after about a
week). I had to find the patch that fixed a good bit of the slowness and
backport it to the older version so we could successfully run pg_upgrade
(in something like 14 hours instead of 7+ days).
Correct, I don't know if it is still true but definitely pre 9.3, if you
had lots and lots of tables, you were looking at very long times to
actually start a dump. The thing is, although 500k tables is very rare,
10k tables isn't nearly as rare. That would still take entirely too long.

Sincerely,

jD
--
Command Prompt, Inc. http://the.postgres.company/
+1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.
Unless otherwise stated, opinions are my own.
--
Sent via pgsql-general mailing list (pgsql-***@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Chris Travers
2016-07-29 03:01:18 UTC
Permalink
Post by Chris Travers
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome that
this class of bug can happen at all. A new version of Postgres could be
released at any time that has a bug of this nature, and because of the way
replication works, this issue has the potential to spread into all of the
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump truck. Hopefully
they vectored somewhere in the middle and got themselves a nice sandbox.
Post by Geoff Winkless
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring the
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.
Post by Scott Marlowe
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a shortcut that sped up ORDER BY by quite a bit. It also meant
that ORDER BY DESC would actually get ORDER BY ASC output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again
took about a year to fix.
The fact that it was introduced into a General Release midstream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.
As for MySQL issues, personally I love the fact that a single query
inserting a bunch of rows can sometimes deadlock against itself. And I
love the fact that this is obliquely documented as expected behavior. May
I mention I am *really glad* PostgreSQL doesn't go the whole multi-threaded
backend route and that this is exhibit A as to why (I am sure it is a
thread race issue between index and table updates)?
Sorry, I think this is a biased vision. Multi-threading will show as many
problems as multi-process - both have to manage simultaneous access (or, at
least, the right semaphore implementation to serialize writes and
synchronize reads).
The fact is, on this point at least, that Postgres is correctly
implemented, and MySQL is faulty.
I've faced the "lost FK integrity hell" (caused by the problem above) with
MySQL long before I decided to migrate all systems to PostgreSQL.
Both have potential for the same sorts of problems to be sure, but the
difference is that process isolation does give you some protection and
helps contain the issues. Not the only way to do so, to be sure. But
getting some extra help from the OS and CPU in this area really is not a
bad thing. My point is that I like the design decision here regarding
PostgreSQL.
Post by Chris Travers
My personal experience is that MySQL is excellent for data that is not
sensitive (web site, e-mail settings, etc). Everything else goes to
PostgreSQL (or Oracle, or MS SQL Server, or Sybase, or DB2 - in *my* order
of preference).
MySQL is a good system for a few sorts of problems. InnoDB is
well-optimized for simple primary key lookups. So if what you really want
is SQL access to a dumb information store with network access and
reasonable write scalability, it is not a bad system. You will note that
this is actually pretty close to Uber's use case with Schemaless. But at
that point, foreign keys are pretty close to being optional.
Post by Chris Travers
Regards,
Edson Richter
--
Best Wishes,
Chris Travers

Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more
Achilleas Mantzios
2016-07-29 06:43:42 UTC
Permalink
Post by Scott Marlowe
Post by Geoff Winkless
Post by Scott Mead
"The bug we ran into only affected certain releases of Postgres 9.2 and
has been fixed for a long time now. However, we still find it worrisome that
this class of bug can happen at all. A new version of Postgres could be
released at any time that has a bug of this nature, and because of the way
replication works, this issue has the potential to spread into all of the
databases in a replication hierarchy."
ISTM that they needed a tire swing and were using a dump truck. Hopefully
they vectored somewhere in the middle and got themselves a nice sandbox.
At least his bug got fixed. The last 2 bugs I reported to MySQL resulted in
an initial refusal to accept any problem existed, followed by (once that
particular strategy had run out of steam) the developer simply ignoring the
bug until it was closed automatically by their bug system. As far as I'm
aware those bugs still exist in the most recent version.
Best / worst MySQL bug was one introduced and fixed twice. Someone put
in a shortcut that sped up ORDER BY by quite a bit. It also meant
that ORDER BY DESC would actually get ORDER BY ASC output. It was
inserted into the code due to poor oversight / code review practices,
then fixed about 9 months later, then introduced again, and again
took about a year to fix.
The fact that it was introduced into a General Release midstream with
no testing or real reviews speaks volumes about MySQL and its
developers. The fact that it took months to years to fix each time
does as well.
As for MySQL issues, personally I love the fact that a single query inserting a bunch of rows can sometimes deadlock against itself. And I love the fact that this is obliquely documented as
expected behavior. May I mention I am *really glad* PostgreSQL doesn't go the whole multi-threaded backend route and that this is exhibit A as to why (I am sure it is a thread race issue between
index and table updates)?
Sorry, I think this is a biased vision. Multi-threading will show as many problems as multi-process - both have to manage simultaneous access (or, at least, the right semaphore implementation to serialize
writes and synchronize reads).
The fact is, on this point at least, that Postgres is correctly implemented, and MySQL is faulty.
I've faced the "lost FK integrity hell" (caused by the problem above) with MySQL long before I decided to migrate all systems to PostgreSQL.
My personal experience is that MySQL is excellent for data that is not sensitive (web site, e-mail settings, etc). Everything else goes to PostgreSQL (or Oracle, or MS SQL Server, or Sybase, or DB2
- in *my* order of preference).
+1
Regards,
Edson Richter
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
Bruce Momjian
2016-07-29 16:58:57 UTC
Permalink
Post by Bruce Momjian
https://vimeo.com/145842299
You can see the failover happened because of various user errors. That
doesn't excuse our bug, but I am not sure exactly how much they
understood of Postgres behavior. His talk near the end about the
replication infrastructure being exposed to them was also interesting.
Here is a more balanced blog post that corrects some missing
information, e.g. HOT updates, hot_standby_feedback:

http://use-the-index-luke.com/blog/2016-07-29/on-ubers-choice-of-databases
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Jerry Sievers
2016-07-29 17:33:12 UTC
Permalink
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for
pg_upgrade to finish the upgrade while the database is offline.
In some distros, after an upgrade of the PG version you don't have the
old binaries and libraries, and need to do a full dump and restore, which
takes time and disk space.
Yeah, very hard indeed when done by unskilled DBAs :-)

I've done several ~7TB pg_upgrades and with the hard link option and a
framework that parallelizes the post-analyzer phase...

...45 minutes till completion.
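
For the archives, the fast path described here looks roughly like the
following; the paths, version numbers, and job count are illustrative only,
not a tested recipe:

```shell
# Hard-link upgrade: no data copy; the old cluster becomes unusable once
# the new one is started (illustrative paths and versions).
pg_upgrade \
  --old-bindir=/usr/lib/postgresql/9.1/bin \
  --new-bindir=/usr/lib/postgresql/9.5/bin \
  --old-datadir=/var/lib/postgresql/9.1/main \
  --new-datadir=/var/lib/postgresql/9.5/main \
  --link

# The slow part afterwards is rebuilding planner statistics; on 9.5+
# vacuumdb can parallelize that instead of running one serial ANALYZE:
vacuumdb --all --analyze-in-stages --jobs=8
```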
Post by Condor
Regards,
Hristo S.
--
Jerry Sievers
Postgres DBA/Development Consulting
e: ***@comcast.net
p: 312.241.7800
Maeldron T.
2016-07-29 17:49:36 UTC
Permalink
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
1. Open the page

2. Press Cmd-F on Mac, Ctrl-F on Linux/Windows. (Find on page)

3. Type "transaction" in the search field, without the quotes

4. Scroll through the article and look for the highlights


There are 17 highlights. I count 2 in the MySQL part. Chances are good
that the remaining 15 are in the PostgreSQL part.

That tells you a lot.

When I am told that MySQL supports transactions I face the fact that the
word "transaction" must have at least two meanings and no matter what I
do I know only one of them.

Every time I had to work with MySQL I felt an overwhelming sloppiness. I
can't (or don't want to) deal with it accepting 0 as NULL (but only once),
auto-typecasting pianos to cats (take it as a metaphor), or committing a
"transaction" (in MySQL terms) on client disconnect, without a commit
ever being sent. (Older versions.)

One can say it can’t be that bad as Facebook and now Uber are using it.
The same logic tells that junk food is the best for humans.

In the last few years I tried out more or less every hyped schemaless
database. Not for their main feature, as my data, like the rest of the
data in the Universe, can be put in a schema. I did it because I faced
some of the issues mentioned in the article, and other issues that aren't
mentioned, even on a smaller scale. (Smaller scale means a smaller
company, fewer resources, fewer people. In the end it hurts the same way.)

I still don't see how I could live without transactions, and not only
because a simple and intentional rollback saved me from much coding and
complexity in one of the most important features of my application. But
having a single update statement modify about 70% of the records before
Cassandra crashed is not for me. It tried to repair about 1000 records,
using 3 nodes, for 1 or 2 hours before I deleted the test cluster.

Maybe I did it wrong. Or probably I did. It can't be that bad. People at
Uber probably know more about the internals than I ever will. I also know
that a few big companies have had roughly day-long downtimes thanks to
MongoDB and CouchDB.

Since I know that people who are way more professional than me decide to
use a database engine in production that doesn’t tell you whether it
could store your data or not, I don’t care who is doing what and what is
on his business card.


And yes, I hate upgrading PostgreSQL especially on FreeBSD where
pg_upgrade isn’t really an option.

Sometimes the answer is manual or full vacuum, no matter what the manual
says. (And yes, the downtime does hurt.)


On the other hand, if there were a stable and officially integrated
logical replication that supported multi-master setups, many of the
issues would just be gone. Upgrades wouldn't be painful anymore,
timeline and pg_rewind bugs wouldn't matter, and the DBA could remove
the bloat from the masters one by one, doing that thing in the night
when no one sees it. (Until the full-vacuum police find him, anyway.)

M.
Bruce Momjian
2016-07-29 17:59:46 UTC
Permalink
And yes, I hate upgrading PostgreSQL especially on FreeBSD where pg_upgrade
isn’t really an option.
Is that because it is hard to install the old and new clusters on the
same server on FreeBSD?
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Larry Rosenman
2016-07-29 18:06:04 UTC
Permalink
Post by Bruce Momjian
Post by Maeldron T.
And yes, I hate upgrading PostgreSQL especially on FreeBSD where
pg_upgrade
isn’t really an option.
Is that because it is hard to install the old and new clusters on the
same server on FreeBSD?
The current FreeBSD Ports collection ports only allow ONE version to be
installed at a time.
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: ***@lerctr.org
US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281
Jerry Sievers
2016-07-29 18:17:39 UTC
Permalink
Post by Bruce Momjian
That being said, it doesn't really provide a back-out plan.  The beauty of
replication is that you can halt the upgrade at any point if need be and cut
your (hopefully small) losses. If you use -k, you are all in.  Sure, you could
setup a new standby, stop traffic, upgrade whichever node you'd like (using -k)
and still have the other ready in the event of total catastrophe.  More often
than not, I see DBAs and sysads lead the conversation with "well, postgres
can't replicate from one version to another, so instead.... " followed by a
fast-glazing of management's eyes and a desire to buy a 'commercial database'. 
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
Hmmm, is it at least theoretically possible that if a newly upgraded
system were run for an interval where *no* incompatible changes to DDL
etc had been done...

...that a downgrade could be performed?

Er, using a not yet invented pg_downgrade:-)

I reason that the same kind of voodoo that lets us do those very quick
hard linked upgrades could be used to revert as well without data loss.

Such a feature would be part of whatever newer version that the upgrade
was done to in the first place.

That is, since higher version knew enough about lower version to
rejigger everything... just maybe it could do the reverse.

Had the new version been run for very long with substantial data
changes, then a post-analyze on the downgraded system might be necessary
as well but possibly even this could be omitted in some cases.

Totally nuts? Yes, perhaps :-)

FWIW
Post by Bruce Momjian
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
--
Jerry Sievers
Postgres DBA/Development Consulting
e: ***@comcast.net
p: 312.241.7800
Stephen Frost
2016-07-29 18:50:32 UTC
Permalink
Jerry,
Post by Jerry Sievers
Post by Bruce Momjian
I agree, but I am not sure how to improve it. The big complaint I have
heard is that once you upgrade and open up writes on the upgraded
server, you can't re-apply those writes to the old server if you need to
fall back to the old server. I also don't see how to improve that either.
Hmmm, is it at least theoretically possible that if a newly upgraded
system were run for an interval where *no* incompatible changes to DDL
etc had been done...
...that a downgrade could be performed?
Er, using a not yet invented pg_downgrade:-)
The short answer is 'no'. Consider a case like the GIN page changes: as
soon as you execute DML on a column that has a GIN index on it, we're
going to rewrite that page using a newer version of the page format, and
an older version of PG isn't going to understand it.

Those kind of on-disk changes are, I suspect, why you have to set the
"compatibility" option in the big $O product to be able to later do a
downgrade.
Post by Jerry Sievers
That is, since higher version knew enough about lower version to
rejigger everything... just maybe it could do the reverse.
That might work if you opened the database in read-only mode, but not
once you start making changes.

Thanks!

Stephen
D'Arcy J.M. Cain
2016-07-29 19:03:46 UTC
Permalink
On Fri, 29 Jul 2016 13:06:04 -0500
Post by Larry Rosenman
Post by Bruce Momjian
Is that because it is hard to install the old and new clusters on
the same server on FreeBSD?
The current FreeBSD Ports collection ports only allow ONE version to
be installed at a time.
As does NetBSD. The problem is that, unlike Python (for which BSD allows
multiple versions to be installed), there is only one set of executables
to deal with. It's not an insurmountable problem but it could get messy.

The answer is either chroot or mount and run pg_upgrade on another
server. If you can afford the downtime you can also delete PG, install
the new version and run pg_upgrade without modifying the existing DB.
If it succeeds then replace the directories and restart the new
version. If it fails then uninstall PG, reinstall the older version
and restart. Lather, rinse, repeat until it upgrades cleanly.
--
D'Arcy J.M. Cain <***@druid.net> | Democracy is three wolves
http://www.druid.net/darcy/ | and a sheep voting on
+1 416 788 2246 (DoD#0082) (eNTP) | what's for dinner.
IM: ***@Vex.Net, VoIP: sip:***@druid.net
Bruce Momjian
2016-07-29 19:07:53 UTC
Permalink
Post by D'Arcy J.M. Cain
As does NetBSD. The problem is that unlike Python (which BSD allows
multiple versions) there is only one executable to deal with. It's not
an insurmountable problem but it could get messy.
The answer is either chroot or mount and run pg_upgrade on another
server. If you can afford the downtime you can also delete PG, install
the new version and run pg_upgrade without modifying the existing DB.
If it succeeds then replace the directories and restart the new
version. If it fails then uninstall PG, reinstall the older version
and restart. Lather, rinse, repeat until it upgrades cleanly.
pg_upgrade needs to run the old and new server binaries as part of its
operation, so that would not work.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Bruce Momjian
2016-07-29 19:09:38 UTC
Permalink
Post by Stephen Frost
Post by Jerry Sievers
Er, using a not yet invented pg_downgrade:-)
The short answer is 'no'. Consider a case like the GIN page changes- as
soon as you execute DML on a column that has a GIN index on it, we're
going to rewrite that page using a newer version of the page format and
an older version of PG isn't going to understand it.
Those kind of on-disk changes are, I suspect, why you have to set the
"compatibility" option in the big $O product to be able to later do a
downgrade.
Yes, you would need a mode that prevented new-format writes on the new
server.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Condor
2016-07-29 20:18:54 UTC
Permalink
Post by Jerry Sievers
Post by Condor
Post by Dorian Hoxha
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
Post by Guyren Howe
Honestly, I've never heard of anyone doing that. But it sounds like
they had good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
They are right about upgrades.
It's hard to shut down a 1 TB database and wait a couple of days for
pg_upgrade to finish the upgrade while the database is offline.
In some distros, after an upgrade of the PG version you don't have the
old binaries and libraries, and need to do a full dump and restore, which
takes time and disk space.
Yeah, very hard indeed when done by unskilled DBAs :-)
I've done several ~7TB pg_upgrades and with the hard link option and a
framework that parallelizes the post-analyzer phase...
...45 minutes till completion.
Post by Condor
Regards,
Hristo S.
GL to you
Kevin Grittner
2016-07-29 20:32:26 UTC
Permalink
Post by Condor
Post by Jerry Sievers
I've done several ~7TB pg_upgrades and with the hard link option and a
framework that parallelizes the post-analyzer phase...
...45 minutes till completion.
GL to you
Luck has nothing to do with anything. You really might want to
post with more details and see whether people can help sort out why
you have seen such slow performance where so many others have not.
I hope it's not just a matter of saying "I have a 1TB database and
upgrade is slow, therefore it is slow because it is 1TB." That
would be roughly the equivalent of saying "I have a blue car and it
is slow, therefore it is slow because it is blue." It just might
be the flat tire that actually matters. If your upgrade is slow
because you have 10 million database objects, that might be a hard
one to overcome, but it might be something with an easy solution in
the pg_upgrade options or server configuration.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Jason Dusek
2016-07-31 02:47:17 UTC
Permalink
Post by Jason Dusek
Post by Jason Dusek
With regards to write amplification, it makes me think about OIDs. Used
to be, every row had an OID, and that OID persisted across row versions.
https://www.postgresql.org/docs/9.5/static/runtime-config-compatible.html#GUC-DEFAULT-WITH-OIDS
Post by Jason Dusek
Would reintroducing such a feature address some of Uber's concerns
about multiple indexes? It could, and would do so without the implicit
requirement of a foreign key; but it would also require a fast OID to
CTID mapping.
Would it be best to increase OIDs to 64 bits?
Possibly a choice of 32/64 to be decided when the DB is created???
Moving to 64 bit OIDs would make the OID->CTID mapping take more memory and
could, consequently, make it slower; but OIDs would seem to be required to
support temporal tables so maybe they should make a comeback?
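
A toy model may make the tradeoff concrete. Everything below is
illustrative Python, not PostgreSQL internals: the secondary index stores
the stable OID, so a non-key update moves the row version without touching
the index, at the price of one extra lookup and the memory the OID-to-CTID
mapping costs:

```python
# Sketch: why an OID -> CTID indirection layer reduces index write
# amplification. Names and structures are illustrative only.

class Table:
    def __init__(self):
        self.heap = {}            # ctid -> row version
        self.oid_to_ctid = {}     # the extra mapping a comeback of OIDs implies
        self.email_index = {}     # secondary index: email -> oid (not ctid)
        self.index_writes = 0
        self._next_ctid = 0

    def insert(self, oid, row):
        ctid, self._next_ctid = self._next_ctid, self._next_ctid + 1
        self.heap[ctid] = row
        self.oid_to_ctid[oid] = ctid
        self.email_index[row["email"]] = oid
        self.index_writes += 1

    def update_city(self, oid, city):
        # MVCC-style update: a new row version lands at a new ctid...
        old = self.heap[self.oid_to_ctid[oid]]
        ctid, self._next_ctid = self._next_ctid, self._next_ctid + 1
        self.heap[ctid] = {**old, "city": city}
        self.oid_to_ctid[oid] = ctid
        # ...but the email index still maps to the same OID, so no index
        # write happens - the write amplification being avoided.

    def lookup(self, email):
        # The price: one extra hop through the OID -> CTID mapping.
        return self.heap[self.oid_to_ctid[self.email_index[email]]]

t = Table()
t.insert(1, {"email": "a@example.com", "city": "Oslo"})
t.update_city(1, "Bergen")
assert t.lookup("a@example.com")["city"] == "Bergen"
assert t.index_writes == 1   # only the original insert touched the index
```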

Kind Regards,
Jason Dusek
Jeff Janes
2016-08-01 18:54:00 UTC
Permalink
Post by Bruce Momjian
That depends on how many objects there are consuming that 1 TB.
With millions of small objects, you will have problems. Not as many
in 9.5 as there were in 9.1, but still it does not scale linearly in
the number of objects. If you only have thousands of objects, then as
far as I know -k works like a charm.
millions of tables?
Well, it was a problem at much smaller values, until we fixed many of
them. But the perversity is, if you are stuck on a version before the
fixes, the problems prevent you from getting to a version on which it
is not a problem any more.
Uh, that is only true if the slowness was in _dumping_ many objects.
Most of the fixes have been for _restoring_ many objects, and that is
done in the new cluster, so they should be OK.
There have been improvements on both sides. For the improvements that
need to exist in the old-server to be effective, we did backpatch the
main one back to 9.1, in the October 2015 releases, specifically to
help people get off the old versions. So if you are on 9.1 with
tens/hundreds of thousands of objects, you need to do a minor version
upgrade to at least 9.1.19 before doing the major version upgrade. If
you are on 9.0 or before with so many objects, you don't have a lot of
good options.
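
The pre-flight check described above can be sketched as a trivial version
gate (the helper and its name are illustrative, not an existing tool):

```python
# Sketch: a 9.1 cluster with very many objects should first be
# minor-upgraded to at least 9.1.19 (which carries the backpatched
# pg_dump speedup) before attempting a major-version pg_upgrade.

def parse_version(v):
    """Turn '9.1.19' into (9, 1, 19) for tuple comparison."""
    return tuple(int(p) for p in v.split("."))

def needs_minor_update_first(old_version):
    # Only meaningful for the 9.1 branch; 9.0 and earlier never got the
    # backpatch and, per the message above, have no good options anyway.
    v = parse_version(old_version)
    return v[:2] == (9, 1) and v < (9, 1, 19)

assert needs_minor_update_first("9.1.15")
assert not needs_minor_update_first("9.1.19")
assert not needs_minor_update_first("9.4.5")
```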


Cheers,

Jeff
Bruce Momjian
2016-08-01 18:57:49 UTC
Permalink
Post by Jeff Janes
Post by Bruce Momjian
Uh, that is only true if the slowness was in _dumping_ many objects.
Most of the fixes have been for _restoring_ many objects, and that is
done in the new cluster, so they should be OK.
There have been improvements on both sides. For the improvements that
need to exist in the old-server to be effective, we did backpatch the
main one back to 9.1, in the October 2015 releases, specifically to
help people get off the old versions. So if you are on 9.1 with
tens/hundreds of thousands of objects, you need to do a minor version
upgrade to at least 9.1.19 before doing the major version upgrade. If
you are on 9.0 or before with so many objects, you don't have a lot of
good options.
Yeah, that's what I remember. Thanks for the details.
--
Bruce Momjian <***@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Tatsuo Ishii
2016-08-05 08:16:36 UTC
Permalink
Post by Vik Fearing
Another one I think they obliquely referred to (in the subtle problems
section) was the fact that if you have longer-running queries on the
replica with a lot of updates, you can get funny auto-vacuum-induced
errors (writes from autovacuum on the master can interrupt queries on
the slave). BTW if there is interest in what could be done for that,
something which allows autovacuum to decide how long to wait before
cleaning up dead tuples would be a great enhancement.
You mean something like hot_standby_feedback?
https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
I wonder if their problem could be fixed by using
hot_standby_feedback. I have encountered a similar problem, but it seems
hot_standby_feedback was not any help in that case:

https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp
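
For reference, the standby-side knobs involved look like this (values are
illustrative; note that, as the linked case suggests, feedback does not
help when the conflict comes from a relation lock rather than from tuple
removal):

```ini
# postgresql.conf on the standby (illustrative values)
hot_standby_feedback = on           # standby reports its oldest xmin to the
                                    # primary, so vacuum keeps needed tuples
max_standby_streaming_delay = 60s   # how long replay waits before cancelling
                                    # conflicting queries on the standby
```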

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
Simon Riggs
2016-08-05 10:27:00 UTC
Permalink
Post by Tatsuo Ishii
Post by Vik Fearing
Another one I think they obliquely referred to (in the subtle problems
section) was the fact that if you have longer-running queries on the
replica with a lot of updates, you can get funny auto-vacuum-induced
errors (writes from autovacuum on the master can interrupt queries on
the slave). BTW if there is interest in what could be done for that,
something which allows autovacuum to decide how long to wait before
cleaning up dead tuples would be a great enhancement.
You mean something like hot_standby_feedback?
https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
I wonder if their problem could be fixed by using
hot_standby_feedback. I have encountered similar problem but it seems
https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp
There have been various bugs and enhancements over the years, not all
of which were backpatched.
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tatsuo Ishii
2016-08-05 13:06:37 UTC
Permalink
Post by Simon Riggs
Post by Tatsuo Ishii
Post by Vik Fearing
https://www.postgresql.org/docs/current/static/runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK
I wonder if their problem could be fixed by using
hot_standby_feedback. I have encountered similar problem but it seems
https://www.postgresql.org/message-id/20130829.164457.863984798767991096.t-ishii%40sraoss.co.jp
There have been various bugs and enhancements over the years, not all
of which were backpatched.
The particular case still does not work with PostgreSQL 9.5.3.

On primary:
create table t1(i int);
insert into t1 values(1),(2),(3);

On standby:
begin;
test=# select version();
version
--------------------------------------------------------------------------------------------------------
PostgreSQL 9.5.3 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4, 64-bit
(1 row)

select * from t1;
i
---
1
2
3
(3 rows)

On primary:
delete from t1;

On standby:
select * from t1;
i
---
(0 rows)

On primary:
test=# vacuum verbose t1;
INFO: vacuuming "public.t1"
INFO: "t1": removed 3 row versions in 1 pages
INFO: "t1": found 3 removable, 0 nonremovable row versions in 1 out of 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "t1": truncated 1 to 0 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM

After while on standby:
test=# select * from t1;
FATAL: terminating connection due to conflict with recovery
DETAIL: User was holding a relation lock for too long.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
Simon Riggs
2016-08-05 14:24:52 UTC
Permalink
Post by Tatsuo Ishii
test=# vacuum verbose t1;
INFO: vacuuming "public.t1"
INFO: "t1": removed 3 row versions in 1 pages
INFO: "t1": found 3 removable, 0 nonremovable row versions in 1 out of 1 pages
DETAIL: 0 dead row versions cannot be removed yet.
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins.
0 pages are entirely empty.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO: "t1": truncated 1 to 0 pages
DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.
VACUUM
test=# select * from t1;
FATAL: terminating connection due to conflict with recovery
DETAIL: User was holding a relation lock for too long.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Yes, the VACUUM truncation is still an issue. But statements are
retryable, just like deadlocks.

Unfortunately, the truncation logic always kicks in for small tables of
less than 16 blocks. It's more forgiving on bigger tables.

Maybe we could defer the truncation on the standby in some cases.
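
The "statements are retryable, just like deadlocks" point can be sketched
generically. TransientError below is a stand-in for whatever exception
your driver raises for a recovery conflict or deadlock (typically
identified by SQLSTATE); none of these names are from a real driver API:

```python
import time

class TransientError(Exception):
    """Stand-in for a driver's deadlock / recovery-conflict error."""

def run_with_retry(fn, attempts=3, backoff=0.0):
    """Re-run fn until it succeeds or attempts are exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except TransientError:
            if i == attempts - 1:
                raise                        # give up after the last attempt
            time.sleep(backoff * (2 ** i))   # optional exponential backoff

# A query that fails twice with a recovery conflict, then succeeds:
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("conflict with recovery")
    return ["row1", "row2"]

assert run_with_retry(flaky_query) == ["row1", "row2"]
assert calls["n"] == 3
```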
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services