daf@daftitanium:~$ lsusb
Bus 003 Device 101: ID 534d:2109 MacroSilicon USB Video
cat /var/log/syslog

Oct 2 11:31:13 daftitanium kernel: [738079.626765] usb 3-3: new high-speed USB device number 102 using xhci_hcd
Oct 2 11:31:13 daftitanium kernel: [738079.778290] usb 3-3: New USB device found, idVendor=534d, idProduct=2109, bcdDevice=21.00
Oct 2 11:31:13 daftitanium kernel: [738079.778298] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
Oct 2 11:31:13 daftitanium kernel: [738079.778301] usb 3-3: Product: USB Video
Oct 2 11:31:13 daftitanium kernel: [738079.778303] usb 3-3: Manufacturer: MACROSILICON
Oct 2 11:31:13 daftitanium kernel: [738079.780434] usb 3-3: Found UVC 1.00 device USB Video (534d:2109)
Oct 2 11:31:13 daftitanium kernel: [738079.789530] hid-generic 0003:534D:2109.006D: hiddev5,hidraw12: USB HID v1.10 Device [MACROSILICON USB Video] on usb-0000:00:14.0-3/input4
Oct 2 11:31:13 daftitanium mtp-probe: checking bus 3, device 102: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-3"
Oct 2 11:31:13 daftitanium mtp-probe: bus: 3, device: 102 was not an MTP device
Oct 2 11:31:13 daftitanium systemd-udevd[1632161]: controlC2: Process '/usr/sbin/alsactl -E HOME=/run/alsa -E XDG_RUNTIME_DIR=/run/alsa/runtime restore 2' failed with exit code 99.
Oct 2 11:31:13 daftitanium mtp-probe: checking bus 3, device 102: "/sys/devices/pci0000:00/0000:00:14.0/usb3/3-3"
Oct 2 11:31:13 daftitanium mtp-probe: bus: 3, device: 102 was not an MTP device
Oct 2 11:31:14 daftitanium Thunar[1632166]: thunar-volman: Unsupported USB device type "usb".
Oct 2 11:31:14 daftitanium Thunar[1632194]: thunar-volman: Unsupported USB device type "snd-usb-audio".
Oct 2 11:31:14 daftitanium Thunar[1632198]: thunar-volman: Unsupported USB device type "usbhid".
Oct 2 11:31:14 daftitanium Thunar[1632203]: thunar-volman: Unsupported USB device type "snd-usb-audio".
Oct 2 11:31:14 daftitanium Thunar[1632207]: thunar-volman: Unsupported USB device type "uvcvideo".
Oct 2 11:31:14 daftitanium Thunar[1632213]: thunar-volman: Unsupported USB device type "uvcvideo"
dmesg

[726177.717306] usb 3-1.3.2.2: new full-speed USB device number 100 using xhci_hcd
[726177.821220] usb 3-1.3.2.2: New USB device found, idVendor=046d, idProduct=c52f, bcdDevice=30.00
[726177.821231] usb 3-1.3.2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[726177.821234] usb 3-1.3.2.2: Product: USB Receiver
[726177.821237] usb 3-1.3.2.2: Manufacturer: Logitech
[726177.829229] logitech-djreceiver 0003:046D:C52F.0069: hidraw9: USB HID v1.11 Mouse [Logitech USB Receiver] on usb-0000:00:14.0-1.3.2.2/input0
[726177.887265] logitech-djreceiver 0003:046D:C52F.006A: hiddev4,hidraw10: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:14.0-1.3.2.2/input1
[726177.947216] logitech-djreceiver 0003:046D:C52F.006A: device of type eQUAD step 4 DJ (0x04) connected on slot 1
[726177.955492] input: Logitech M280/320/275 as /devices/pci0000:00/0000:00:14.0/usb3/3-1/3-1.3/3-1.3.2/3-1.3.2.2/3-1.3.2.2:1.1/0003:046D:C52F.006A/0003:046D:4057.006B/input/input113
[726177.955798] logitech-hidpp-device 0003:046D:4057.006B: input,hidraw11: USB HID v1.11 Mouse [Logitech M280/320/275] on usb-0000:00:14.0-1.3.2.2/input1:1
[726185.749246] logitech-hidpp-device 0003:046D:4057.006B: HID++ 4.5 device connected.
[726997.019684] input: DAF QC (AVRCP) as /devices/virtual/input/input114
[736305.306332] usb 3-4: new high-speed USB device number 101 using xhci_hcd
[736305.457861] usb 3-4: New USB device found, idVendor=534d, idProduct=2109, bcdDevice=21.00
[736305.457869] usb 3-4: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[736305.457871] usb 3-4: Product: USB Video
[736305.457873] usb 3-4: Manufacturer: MACROSILICON
[736305.459960] usb 3-4: Found UVC 1.00 device USB Video (534d:2109)
[736305.462657] hid-generic 0003:534D:2109.006C: hiddev5,hidraw12: USB HID v1.10 Device [MACROSILICON USB Video] on usb-0000:00:14.0-4/input4
[736440.565674] usb 3-4: USB disconnect, device number 101
[738079.626765] usb 3-3: new high-speed USB device number 102 using xhci_hcd
[738079.778290] usb 3-3: New USB device found, idVendor=534d, idProduct=2109, bcdDevice=21.00
[738079.778298] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=0
[738079.778301] usb 3-3: Product: USB Video
[738079.778303] usb 3-3: Manufacturer: MACROSILICON
[738079.780434] usb 3-3: Found UVC 1.00 device USB Video (534d:2109)
[738079.789530] hid-generic 0003:534D:2109.006D: hiddev5,hidraw12: USB HID v1.10 Device [MACROSILICON USB Video] on usb-0000:00:14.0-3/input4

FROM: https://www.mjt.me.uk/posts/fixing-missing-macrosilicon-ms2109/

I bought a cheap (£10) HDMI-to-USB2 capture device based on the MacroSilicon MS2109 – but I encountered a problem where it didn’t appear at /dev/video* when plugged in, despite dmesg showing some recognition of the device.

Fortunately, Nikola_F on the Arch Linux forums had already found the problem and offered a solution: the device is wrongly being treated as a USB audio device. You can confirm this by checking that four new symlinks appear in ls -l /sys/bus/usb/drivers/snd-usb-audio/ when you insert the device.

To fix it, I created new udev rules (in a file like /etc/udev/rules.d/91-hdmi-to-usb-ms2109.rules) to apply Nikola_F’s fix automatically:

# For HDMI-to-USB adaptor:
SUBSYSTEM=="usb", DRIVER=="snd-usb-audio", ATTRS{idProduct}=="2109", ATTRS{idVendor}=="534d", \
RUN+="/bin/sh -c 'echo -n $kernel > /sys/bus/usb/drivers/snd-usb-audio/unbind'"

SUBSYSTEM=="usb", DRIVER=="snd-usb-audio", ATTRS{idProduct}=="2109", ATTRS{idVendor}=="534d", \
ATTR{bInterfaceNumber}=="00", RUN+="/bin/sh -c 'sleep 1; echo -n $kernel > /sys/bus/usb/drivers/uvcvideo/bind'"

Give it a sudo udevadm control --reload-rules then unplug and replug, and it should start working.

This disables the audio function of the HDMI-to-USB converter, but I wasn’t using it anyway. More careful rule-writing could probably solve that problem.
Other hints for using the MS2109

I was able to stream video from the device at 1080p with

ffplay -f video4linux2 -framerate 50 -video_size 1920x1080 -input_format mjpeg /dev/video0

The device seemed to struggle if mjpeg wasn’t used – presumably a lack of USB bandwidth. If you want to use it with software you can’t convince to ask for mjpeg, you can create a loopback camera and use ffmpeg to read mjpeg from the capture card and output regular video to the loopback camera, like so:

sudo apt-get install ffmpeg v4l2loopback-utils

sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="LoopbackCam" exclusive_caps=1
ffmpeg -f video4linux2 -framerate 50 -video_size 1920x1080 -input_format mjpeg -i /dev/video0 -f v4l2 -pix_fmt yuv420p /dev/video10

more at: https://bbs.archlinux.org/viewtopic.php?pid=1919811#p1919811 and
https://www.diyaudio.com/community/threads/macrosilicon-2109-cheap-usb-hdmi-a-v-capture.357873/

Technology makes us need less merit to achieve the same results.

good (?) reasons to use an entity-attribute-value design

© Laurenz Albe 2021

Customers have often asked me what I think of “Entity-attribute-value” (EAV) design. So I thought it would be a good idea to lay down my opinion in writing.

What is entity-attribute-value design?

The idea is not to create a table for each entity in the application. Rather, you store each attribute as a separate entry in an attribute table:

CREATE TABLE objects (
   objectid bigint PRIMARY KEY
   /* other object-level properties */
);

CREATE TABLE attstring (
   objectid bigint
      REFERENCES objects ON DELETE CASCADE NOT NULL,
   attname  text NOT NULL,
   attval   text,
   PRIMARY KEY (objectid, attname)
);

CREATE TABLE attint (
   objectid bigint
      REFERENCES objects ON DELETE CASCADE NOT NULL,
   attname  text NOT NULL,
   attval   integer,
   PRIMARY KEY (objectid, attname)
);

/* more tables for other data types */

The name of the model is derived from the “att...” tables, which have the three columns: “entity ID”, “attribute name” and “value”.

There are several variations of the basic theme, among them:

  • omit the objects table
  • add additional tables that define “object types”, so that each type can only have certain attributes

Why would anybody consider an entity-attribute-value design?

The principal argument I hear in support of the EAV design is flexibility. You can create new entity types without having to create a database table. Taken to the extreme, each entity can have different attributes.

I suspect that another reason for people to consider such a data model is that they are more familiar with key-value stores than with relational databases.

Performance considerations of entity-attribute-value design

In my opinion, EAV database design is the worst possible design when it comes to performance. You will never get good database performance with such a data model.

The only use cases where EAV shines are those where it is used as a key-value store.

INSERT

Inserting an entity will look like this:

INSERT INTO objects (objectid) VALUES (42);

INSERT INTO attstring (objectid, attname, attval)
VALUES (42, 'name', 'myobject');

INSERT INTO attint (objectid, attname, attval)
VALUES (42, 'start', 100),
       (42, 'end', 1000);

That means that we insert four rows into three tables and have four index modifications. Also, the three statements will require three client-server round trips. You can save on the round trips by using CTEs to turn that into a single statement, or by using the new pipeline mode of libpq. Still, it will be much more expensive than inserting a single table row.
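As a sketch of the round-trip optimization (this exact statement is not from the original article), the three inserts can be combined into one statement with data-modifying CTEs:

WITH new_object AS (
   INSERT INTO objects (objectid) VALUES (42)
   RETURNING objectid
), string_atts AS (
   INSERT INTO attstring (objectid, attname, attval)
   SELECT objectid, 'name', 'myobject' FROM new_object
)
INSERT INTO attint (objectid, attname, attval)
SELECT objectid, v.attname, v.attval
FROM new_object
   CROSS JOIN (VALUES ('start', 100), ('end', 1000)) AS v(attname, attval);

That saves the round trips, but the same four rows and four index entries are still written.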

DELETE

If you use cascading delete, you could do that in a single statement:

DELETE FROM objects WHERE objectid = 42;

Still, you will end up deleting four table rows and modifying four indexes. That’s much more work than deleting a single table row.

UPDATE

A single-column update could actually be faster with the EAV design, because only one small table row is modified:

UPDATE attint
SET attval = 2000
WHERE objectid = 42 AND attname = 'end';

 

But if you have to modify several columns, you will need to run several UPDATE statements. That will be slower than if you only had to modify a single (albeit bigger) table row.
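For example, changing both integer attributes of the object above takes two statements with the EAV model, while a conventional table (the name myobjects and its columns are made up for illustration) needs a single single-row update:

UPDATE attint SET attval = 200  WHERE objectid = 42 AND attname = 'start';
UPDATE attint SET attval = 2000 WHERE objectid = 42 AND attname = 'end';

/* conventional design: one row, one statement (hypothetical table) */
UPDATE myobjects SET start_value = 200, end_value = 2000 WHERE objectid = 42;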

SELECT

Querying the attributes of an entity requires a join:

SELECT s.attval AS "name",
       ai1.attval AS "start",
       ai2.attval AS "end"
FROM objects AS o
   LEFT JOIN attstring AS s USING (objectid)
   LEFT JOIN attint AS ai1 USING (objectid)
   LEFT JOIN attint AS ai2 USING (objectid)
WHERE objectid = 42
  AND s.attname = 'name'
  AND ai1.attname = 'start'
  AND ai2.attname = 'end';

Alternatively, you could run three separate queries, one for each attribute. No matter how you do it, it will be less efficient than a single-row SELECT from a single table.

Single-column aggregates

As an example for a query that might be faster with the EAV model, consider a query that aggregates data from a single column:

SELECT sum(attval) AS total
FROM othertab
   JOIN attint USING (objectid)
WHERE othertab.col = 'x'
  AND attint.attname = 'attendants';

With a covering index on attint(objectid, attname) INCLUDE (attval), this could be quite a bit faster than aggregating a column from a wider table.
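For reference, the covering index described above could be created like this (the INCLUDE clause requires PostgreSQL 11 or later):

CREATE INDEX ON attint (objectid, attname) INCLUDE (attval);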

More complicated queries

After these examples, it is clear that writing more complicated queries will be a pain with the EAV design. Imagine a simple join:

SELECT e1a1.attval AS person_name,
       e1a2.attval AS person_id,
       e2a1.attval AS address_street,
       e2a2.attval AS address_city
FROM attint AS e1a2
   JOIN attstring AS e1a1
      ON e1a2.objectid = e1a1.objectid
   LEFT JOIN attint AS e2a0
      ON e1a2.attval = e2a0.attval
   LEFT JOIN attstring AS e2a1
      ON e2a0.objectid = e2a1.objectid
   LEFT JOIN attstring AS e2a2
      ON e2a0.objectid = e2a2.objectid
WHERE e1a1.attname = 'name'
  AND e1a2.attname = 'persnr'
  AND e2a0.attname = 'persnr'
  AND e2a1.attname = 'street'
  AND e2a2.attname = 'city';

If you think that this query is hard to read, I agree with you. In a normal relational data model, the same operation could look like this:

SELECT person.name AS person_name,
       persnr AS person_id,
       address.street,
       address.city
FROM person
   LEFT JOIN address
      USING (persnr);

You can guess which query will perform better.

But we need an entity-attribute-value design for flexibility!

Relational data models are not famous for their flexibility. After all, that is the drive behind the NoSQL movement. However, there are good ways to deal with variable entities.

Creating tables on the fly

Nothing keeps you from running statements like CREATE TABLE and CREATE INDEX from your application. So if there is a limited number of entity types, and each type has a certain number of attributes, you can easily model that with a traditional relational model.
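For example (the entity type and its attributes are invented for illustration), the application would generate and execute DDL like this when a new entity type appears:

CREATE TABLE sensor_reading (
   objectid    bigint PRIMARY KEY,
   sensor_name text NOT NULL,
   measured_at timestamp with time zone NOT NULL,
   value       double precision
);

CREATE INDEX ON sensor_reading (measured_at);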

Certain problems remain:

  • A data model that grows on the fly may not end up being well-designed. But that’s not different in the entity-attribute-value design.
  • If the application has to create tables, it needs permission to do so. But today, when many applications create their own database tables anyway, few people will worry about that.

Creating tables on the fly will only work well if the set of attributes for each entity is well-defined. If that is not the case, we need a different approach.

Using JSON for a flexible data model

PostgreSQL has extensive JSON support that can be used to model entities with a variable number of attributes.

For that, you model the important and frequently occurring attributes as normal table columns. Then you add an additional column of type jsonb with a GIN index on it. This column contains the “rare attributes” of the entity as key-value pairs.

When using a model like this, you should take care that attributes

  • used in joins
  • on which you need a database constraint
  • that you want to use in a WHERE condition with an operator different from "="

are modeled as regular table columns.
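A minimal sketch of this approach (the table and the attribute names are made up):

CREATE TABLE product (
   productid  bigint PRIMARY KEY,
   name       text NOT NULL,      /* frequent attribute: a regular column */
   price      numeric NOT NULL,   /* frequent attribute: a regular column */
   attributes jsonb               /* rare attributes as key-value pairs */
);

CREATE INDEX ON product USING gin (attributes);

/* containment queries like this one can use the GIN index */
SELECT productid, name
FROM product
WHERE attributes @> '{"color": "red"}';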

Conclusion

Avoid entity-attribute-value designs in your relational database. EAV causes bad performance, and there are other ways to have a flexible data model in PostgreSQL.

Source: Laurenz Albe @ cybertec-postgresql.com

Suppose that you want to be completely over-the-top paranoid about making sure that when you execute some particular SQL statement on your Postgres database, you’re doing it in the safest and least risky way?

For example, suppose it’s the production database behind your successful startup’s main commercial website. If anything causes queries to block or pause for even a few minutes, people will quickly be tweeting about how they can’t place orders, and that hurts both your company’s revenue and reputation.

You know that it’s really important to save regular snapshots and keep a history of important metrics. You’re writing some of your own code to capture a few specific stats like the physical size of your most important application tables or maybe the number of outstanding orders over time. Maybe you’re writing some java code that gets scheduled by quartz, or maybe some python code that you’ll run with cron.

Or another situation might be that you’re planning to make an update to your schema – adding a new table, adding a new column to an existing table, modifying a constraint, etc. You plan to execute this change as an online operation during the weekly period of lowest activity on the system – maybe it’s very late Monday night, if you’re in an industry that’s busiest over weekends.

How can you make sure that your SQL is executed on the database in the safest possible way?

Here are a few ideas I’ve come up with:

  • Setting connect_timeout to something short, for example 2 seconds.
  • Setting lock_timeout to something appropriate. For example, 2ms on queries that shouldn’t be doing any locking. (I’ve seen entire systems brown-out because a “quick” DDL had to get in line behind an app transaction, and then all the new app transactions piled up behind the DDL that was waiting!)
  • Setting statement_timeout to something reasonable for the query you’re running – thus putting an upper bound on execution time. (A sketch that combines these session settings appears after this list.)
  • Using an appropriate client-side timeout, for cases when the server fails to kill the query using statement_timeout. For example, in Java the Statement class has native support for this.
  • When writing SQL, fully qualify names of tables and functions with the schema/namespace. (This can be a security feature; I have heard of attacks where someone manages to change the search_path for connections.)
  • Check at least one explain plan and make sure it’s doing what you would expect it to be doing, and that it seems likely to be the most efficient way to get the information you need.
  • Don’t use system views that join in unneeded data sources; go directly to the raw relation or function you need.
  • Access each data source exactly once, never more than once. In that single pass, get all data that will be needed. Analytic or window functions are very useful for avoiding self-joins.
  • Restrict the user to minimum needed privileges. For example, the pg_read_all_stats role on an otherwise unprivileged user might be useful.
  • Make sure your code has back-off logic for when failures or unexpected results are encountered.
  • Prevent connection pile-ups resulting from database slowdowns or hangs. For example, by using a dedicated client-side connection pool with dynamic sizing entirely disabled or with a small max pool size.
  • Run the query against a physical replica/hot standby (e.g. pulling a metric for the physical size of important tables) or logical copy (e.g. any query against application data), instead of running the query against the primary production database. (However, note that when hot_standby_feedback is enabled, long-running transactions on the PostgreSQL hot standby can still impact the primary system.)
  • For all DDL, carefully check the level of locking that it will require and test to get a feel for possible execution time. Watch out for table rewrites. Many DDLs that used to require a rewrite no longer do in current versions of PostgreSQL, but there are still a few out there. ALTER TABLE statements must be evaluated very carefully. Frankly ALTER TABLE is a bit notorious for being unclear about which incantations cause table rewrites and which ones don’t. (I have a friend who just tests every specific ALTER TABLE operation first on an empty table and then checks if pg_class changes show that a rewrite happened.)
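A minimal sketch (not from the original post) pulling a few of these ideas together; scratch_table and some_col are hypothetical names, and the timeout values are only examples:

-- session-level guard rails before running a hand-written query or DDL
SET lock_timeout = '2s';
SET statement_timeout = '30s';

-- a fully schema-qualified, single-pass metric query against the raw catalogs
SELECT c.relname, pg_catalog.pg_total_relation_size(c.oid) AS total_bytes
FROM pg_catalog.pg_class AS c
   JOIN pg_catalog.pg_namespace AS n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r';

-- the "did this ALTER TABLE rewrite the table?" test: run it on a scratch copy first
-- and compare the relfilenode before and after (a changed value means a rewrite happened)
SELECT pg_catalog.pg_relation_filenode('public.scratch_table');
ALTER TABLE public.scratch_table ALTER COLUMN some_col TYPE bigint;
SELECT pg_catalog.pg_relation_filenode('public.scratch_table');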

 

What am I missing? What other ideas are out there for executing SQL in Postgres with a “paranoid” level of safety?

Source: Jeremy from ardentperf.com

Tables in PostgreSQL have an index for the primary key and separate row storage called the heap. Other databases integrate these together or support “index-organized tables”. In this arrangement, the primary key lookup process leads directly to the row data without a secondary fetch to get the full row and the requisite additional CPU and I/O utilization.

The CLUSTER command in PostgreSQL reorganizes a table according to an index to improve performance, but it doesn’t really work for most real-world OLTP cases. It rewrites the entire table under an exclusive lock, blocking any reads or writes. PostgreSQL doesn’t maintain the clustered layout for new data, so this operation must be run periodically. In practice it is only useful if you can take your database offline for long periods of time on a regular basis.

But more critically, index-organized tables save space as the index doesn’t require a separate copy of the row data. For tables with small rows that are mostly covered by the primary key, such as join tables, this can easily cut the table’s storage footprint in half.

Consider the following table which stores social “likes” for arbitrary objects:

CREATE TABLE likes (
 object_type INTEGER NOT NULL,
 object_id BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY,
 user_id BIGINT NOT NULL,
 created_at TIMESTAMP WITH TIME ZONE NOT NULL,
 PRIMARY KEY(object_type, object_id, user_id)
);

PostgreSQL will maintain an index for the primary key which is separate from the base table storage. This index will contain a full copy of the object_type, object_id, and user_id columns for every row. 20 out of 28 bytes in each row (~70%) will be duplicated. If PostgreSQL supported index-organized tables, it wouldn’t consume all that additional space.
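A quick way to see how much of the footprint the primary key index accounts for (a sketch using the table above):

SELECT pg_size_pretty(pg_table_size('likes'))   AS heap_size,
       pg_size_pretty(pg_indexes_size('likes')) AS index_size;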

Source: medium.com/@rbranson/10-things-i-hate-about-postgresql-20dbab8c2791

Would you like to learn how to monitor a PostgreSQL server using Zabbix? In this tutorial, we are going to show you how to use Zabbix to monitor a PostgreSQL server.


Configure a PostgreSQL user for Zabbix

• IP – 192.168.15.11
• Operating System – Ubuntu 19.10
• Hostname – POSTGRESQL

Zabbix requires an account to access the PostgreSQL server and perform database monitoring.

On the PostgreSQL database server, create an account for Zabbix.

Give the Zabbix user monitoring privileges over all PostgreSQL databases.
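The original commands are not reproduced above; a minimal sketch that matches the zbx_monitor account used later in this tutorial, assuming PostgreSQL 10 or newer so the built-in pg_monitor role is available:

CREATE USER zbx_monitor WITH PASSWORD 'choose-a-password';
GRANT pg_monitor TO zbx_monitor;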


Take note of the PostgreSQL username and password created.

Locate and edit the pg_hba.conf configuration file.


Add the following lines at the beginning of this file.
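The original lines are not reproduced above; one plausible form, assuming the zbx_monitor account and trust authentication for local connections (so that no password is requested in the test below):

host    all    zbx_monitor    127.0.0.1/32    trust
host    all    zbx_monitor    ::1/128         trust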


Here is our pg_hba.conf file content:


Restart the PostgreSQL service.
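The restart command itself is not shown above; on Ubuntu it is typically:

sudo systemctl restart postgresql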


Test if you are able to connect to the local PostgreSQL server using the zbx_monitor account.

The system should not ask for a password.
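One way to test this (assuming the zbx_monitor account created earlier):

psql -h 127.0.0.1 -U zbx_monitor -d postgres -c "SELECT version();"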


You have finished the PostgreSQL service configuration.

Install the Zabbix Agent on the PostgreSQL Server

• IP – 192.168.15.11
• Operating System – Ubuntu 19.10
• Hostname – POSTGRESQL

Now, we need to install the Zabbix agent on the computer running the PostgreSQL service.

On the Linux console, use the following commands to install the required packages.


Download the Zabbix installation package.


Extract the Zabbix installation package, compile and install the Zabbix agent.


Find the location of the zabbix_agentd.conf file on your system.

Edit the zabbix_agentd.conf file.


Here is the original file, before our configuration.


Here is the new file with our configuration.


In our example, the Zabbix agent is configured to allow connections from the Zabbix server 192.168.15.10.

The Zabbix server with the IP address 192.168.15.10 is allowed to request and receive information from this agent.

Localhost, 127.0.0.1, is also allowed to request and receive information from the agent.

Create the following required directories.


Download the Zabbix source code from Github.


Copy the required files to monitor the PostgreSQL server from the Zabbix repository.


The Zabbix installation package comes with a service startup script.

Copy the startup script using the commands below.


You can now use the following commands to start the Zabbix agent service.


In our example, the Zabbix agent will monitor the PostgreSQL service installed on the localhost.

Restart the Zabbix Agent.


You have finished the Zabbix agent installation.

You can now use the Zabbix server dashboard to add this computer to the network monitoring service.

Zabbix Monitor PostgreSQL

Access the Zabbix server dashboard and add the Linux computer running PostgreSQL as a Host.

Open your browser and enter the IP address of your web server plus /zabbix.

In our example, the following URL was entered in the Browser:

• http://192.168.15.10/zabbix

On the login screen, use the default username and default password.

• Default Username: Admin
• Default Password: zabbix


After a successful login, you will be sent to the Zabbix Dashboard.


On the dashboard screen, access the Configuration menu and select the Host option.


On the top right of the screen, click on the Create host button.


Enter the following information:

• Host Name – Enter a Hostname to identify the Linux server running PostgreSQL.
• Visible Hostname – Repeat the hostname.
• Group – Select the name of a group to identify similar devices.
• Agent Interface – Enter the IP address of the Linux server.

Here is the original image, before our configuration.


Here is the new image with our configuration.


Next, we need to associate the host with a specific network monitor template.

By default, Zabbix comes with a large variety of monitoring templates.

Access the Templates tab on the top of the screen.

Click on the Select button and locate the template named: Template DB PostgreSQL


Click on the Add option.

Click on the Add button.

After a few minutes, you will be able to see the initial result on the Zabbix Dashboard.

In order to test your configuration, access the Monitoring menu and click on the Graphs option.


On the top right of the screen, select the group named ALL.

Select your PostgreSQL computer hostname.

Select the graph named: PostgreSQL connections


You should be able to see the graph named PostgreSQL connections.


Congratulations! You have configured the Zabbix server to monitor PostgreSQL.

From https://techexpert.tips/zabbix/monitor-postgresql-using-zabbix/

Many developers still install a database on their computer and work on it.
That may be the preferred method for some, but with Docker it takes only seconds to set up a database. I used PostgreSQL and Docker in my latest project and gained both efficiency and convenience. I will share my experience with PostgreSQL and Docker.
I can start by finding the Docker image we will use. There are two easy ways to do this. I prefer to visit the Docker Hub website. Alternatively, you can use the command below; it will list results similar to Docker Hub.
docker search postgresql
I will use postgres, the official image that appears in the search results. I prepared a Docker command with the important and necessary parameters. It can easily be customized to your needs.
docker run --name postgresqldb -e POSTGRES_USER=myusername -e POSTGRES_PASSWORD=mypassword -p 5432:5432 -v /data:/var/lib/postgresql/data -d postgres
It looks a little confusing, but don’t worry. I will explain these parameters.
  • --name : I set a name for the container.
  • -e POSTGRES_USER : I set the database username.
  • -e POSTGRES_PASSWORD : I set the database password.
  • -p : Port 5432 is used by default, but it can be changed according to preference. The local port and the container port can be set separately; the first port is the local one and the second is the container’s.
  • -v : This parameter is related to the Docker volume feature. This is important for not losing data. With this feature, I synchronize the folder containing the database’s data with a folder of my choice on the computer or server. When the container is stopped or restarted for any reason, it can easily be started again with the same data. The first path can be any folder you prefer; I set it to /data as an example. Please do not change the second path, because the data lives in that path by default.
  • -d : I run in detached mode (background mode). If you run without it, the Docker container will be stopped when you press Ctrl + C or close the terminal screen. This is undesirable, because the container should keep running in the background.
  • postgres : The image name.
Let’s run the command together and see the result.
If you see similar output, you can be sure that everything is fine. You can see the running Docker containers with the command below.
docker ps
You can also stop or restart your container using start and stop commands. In this example I set the container name as postgresqldb. You can run the start and stop commands with your own chosen name.
docker start postgresqldb
docker stop postgresqldb
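To double-check that the database is reachable, you can open a psql session inside the container (using the container name and username from the example above):
docker exec -it postgresqldb psql -U myusername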
You may think it took a long time because I explained the details. Don’t let that put you off: you can create a PostgreSQL database just by running a single Docker command.

 

Taken from https://hackernoon.com/how-to-install-postgresql-with-docker-quickly-ki7g363m

I’ve installed 19.04, removed dnsmasq, and disabled systemd-resolved using

sudo systemctl disable systemd-resolved.service
sudo systemctl stop systemd-resolved

and I want to use Network Manager to set my DNS servers manually. To configure a different DNS server, I went to the network icon on the bar, clicked Wired -> Wired Settings -> Configure -> IPV4, selected Automatic (DHCP), disabled automatic DNS, and entered my desired DNS servers. Even after rebooting, my /etc/resolv.conf file only contains 127.0.0.53, no matter what I do in the Network Manager configuration. Is there a way for the /etc/resolv.conf file to be configured by Network Manager?

I’ve found a solution.

First, disable systemd-resolve:

sudo systemctl disable systemd-resolved.service
sudo systemctl stop systemd-resolved

Remove the symlink for resolv.conf and create a file:

rm -rf /etc/resolv.conf
touch /etc/resolv.conf
chmod a+rw /etc/resolv.conf

Now change the configuration file for Network Manager to restore the expected result: vi /etc/NetworkManager/NetworkManager.conf:

[main]
plugins=ifupdown,keyfile
# add this line
dns=default

and leave the rest of the file as is. Restart Network Manager:

sudo /etc/init.d/network-manager restart

and configure your preferred DNS servers. Done!
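After reconnecting, /etc/resolv.conf should contain the servers you configured, for example (these addresses are only placeholders):

nameserver 1.1.1.1
nameserver 8.8.8.8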

https://askubuntu.com/questions/1159084/make-dns-follow-configuration-from-network-manager-in-19-04

Here are my tests; the original article follows below:

yum install wal2json

create database daftest;
daftest=# CREATE TABLE test_table (
    id   char(10) NOT NULL,
    code char(10),
    PRIMARY KEY (id)
);


export PATH="$PATH:/usr/pgsql-12/bin"
pg_recvlogical
pg_recvlogical: error: no slot specified

pg_recvlogical -d daftest --slot test_slot --create-slot -P wal2json
pg_recvlogical -d daftest --slot test_slot --start -o pretty-print=1 -f - ## This will sit there as if waiting for something...

In a second terminal I run:
daftest=# insert into test_table (id,code) select 1, 'Hola';
INSERT 0 1

In the first terminal you see:

{
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["1 ", "Hola "]
    }
  ]
}

In the second terminal: daftest=# update test_table set code = 'Chau' where id = '1';
UPDATE 1

In the first terminal:
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["1 ", "Chau "],
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["1 "]
      }
    }
  ]
}

In the second terminal: daftest=# delete from test_table where id = '1';
DELETE 1

In the first terminal:
{
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "test_table",
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["1 "]
      }
    }
  ]
}

ctrl+C
pg_recvlogical -d daftest --slot test_slot --drop-slot

Logical Decoding Output Plug-in Installation for PostgreSQL

This document describes the database setup required for streaming data changes out of PostgreSQL. This comprises configuration applying to the database itself as well as the installation of the wal2json logical decoding output plug-in. The installation and the tests are performed in the following environment/configuration:

Similar steps need to be taken for other Postgres and OS versions and for the Decoderbufs logical decoding plug-in, which is also supported by Debezium.

As of Debezium 0.10, the connector supports PostgreSQL 10+ logical replication streaming using pgoutput. This means that a logical decoding output plug-in is no longer necessary and changes can be emitted directly from the replication stream by the connector.

Logical Decoding Plug-ins

Logical decoding is the process of extracting all persistent changes to a database’s tables into a coherent, easy to understand format which can be interpreted without detailed knowledge of the database’s internal state.

As of PostgreSQL 9.4, logical decoding is implemented by decoding the contents of the write-ahead log, which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements. In the context of logical replication, a slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database. The output plug-ins transform the data from the write-ahead log’s internal representation into the format the consumer of a replication slot desires. Plug-ins are written in C, compiled, and installed on the machine which runs the PostgreSQL server, and they use a number of PostgreSQL specific APIs, as described by the PostgreSQL documentation.
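As a side note (not part of the original document), slots can also be created and read directly from SQL with the built-in replication functions, provided the output plug-in is installed and the role has replication privileges; a minimal sketch:

SELECT * FROM pg_create_logical_replication_slot('sql_test_slot', 'wal2json');
SELECT * FROM pg_logical_slot_get_changes('sql_test_slot', NULL, NULL);
SELECT pg_drop_replication_slot('sql_test_slot');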

Debezium’s PostgreSQL connector works with one of Debezium’s supported logical decoding plug-ins (decoderbufs or wal2json) to encode the changes in either Protobuf or JSON format.

For simplicity, Debezium also provides a Docker image based on a vanilla PostgreSQL server image on top of which it compiles and installs the plug-ins.

The Debezium logical decoding plug-ins have only been installed and tested on Linux machines. For Windows and other platforms, different installation steps may be required.

Differences between Plug-ins

The plug-ins’ behaviour is not completely the same in all cases. So far, the following differences have been identified:

  • wal2json plug-in is not able to process quoted identifiers (issue)
  • wal2json plug-in does not emit events for tables without primary keys
  • wal2json plug-in does not support special values (NaN or infinity) for floating point types

All up-to-date differences are tracked in a test suite Java class.

More information about the logical decoding and output plug-ins can be found at:

Installation

In the current installation example, the wal2json logical decoding output plug-in is used. The wal2json output plug-in produces a JSON object per transaction. All of the new/old tuples are available in the JSON object. The plug-in compilation and installation are performed by executing the related commands extracted from the Debezium Docker image file.

Before executing the commands, make sure that the user has the privileges to write the wal2json library to the PostgreSQL lib directory (in the test environment, the directory is /usr/pgsql-9.6/lib/). Also note that the installation process requires the PostgreSQL utility pg_config. Verify that the PATH environment variable is set so that the utility can be found. If not, update the PATH environment variable appropriately. For example, in the test environment:

export PATH="$PATH:/usr/pgsql-9.6/bin"
wal2json installation commands
$ git clone https://github.com/eulerto/wal2json -b master --single-branch \
&& cd wal2json \
&& git checkout d2b7fef021c46e0d429f2c1768de361069e58696 \
&& make && make install \
&& cd .. \
&& rm -rf wal2json
wal2json installation output
Cloning into 'wal2json'...
remote: Counting objects: 445, done.
remote: Total 445 (delta 0), reused 0 (delta 0), pack-reused 445
Receiving objects: 100% (445/445), 180.70 KiB | 0 bytes/s, done.
Resolving deltas: 100% (317/317), done.
Note: checking out 'd2b7fef021c46e0d429f2c1768de361069e58696'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b new_branch_name

HEAD is now at d2b7fef... Improve style
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -I. -I./ -I/usr/pgsql-9.6/include/server -I/usr/pgsql-9.6/include/internal -D_GNU_SOURCE -I/usr/include/libxml2  -I/usr/include  -c -o wal2json.o wal2json.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fPIC -L/usr/pgsql-9.6/lib -Wl,--as-needed  -L/usr/lib64 -Wl,--as-needed -Wl,-rpath,'/usr/pgsql-9.6/lib',--enable-new-dtags  -shared -o wal2json.so wal2json.o
/usr/bin/mkdir -p '/usr/pgsql-9.6/lib'
/usr/bin/install -c -m 755  wal2json.so '/usr/pgsql-9.6/lib/'

Installation on Fedora 30+

Debezium provides an RPM package for the Fedora operating system too. The package is always updated after a final Debezium release is done. To use the RPM in question, just issue the standard Fedora installation command:

$ sudo dnf -y install postgres-decoderbufs

The rest of the configuration is the same as described below for the wal2json plug-in.

PostgreSQL Server Configuration

Once the wal2json plug-in has been installed, the database server should be configured.

Setting up libraries, WAL and replication parameters

Add the following lines at the end of the postgresql.conf PostgreSQL configuration file in order to add the plug-in to the shared libraries and to adjust some WAL and streaming replication settings. The configuration is extracted from postgresql.conf.sample. You may need to modify it if, for example, you already load other libraries via shared_preload_libraries.

postgresql.conf configuration file parameter settings
############ REPLICATION ##############
# MODULES
# load the wal2json logical decoding plug-in at startup (use decoderbufs for Protobuf;
# the plug-in names are set in the protobuf and wal2json Makefiles)
shared_preload_libraries = 'wal2json'

# REPLICATION
wal_level = logical             # use logical decoding with the write-ahead log
max_wal_senders = 4             # use a maximum of 4 separate processes for processing WAL changes
max_replication_slots = 4       # allow a maximum of 4 replication slots to be created for streaming WAL changes

Debezium needs the PostgreSQL WAL to be kept during Debezium outages. If your WAL retention is too small and the outages are too long, then Debezium will not be able to recover after a restart, as it will have missed part of the data changes. The usual indicator is an error similar to this one thrown during startup: ERROR: requested WAL segment 000000010000000000000001 has already been removed.

When this happens, it is necessary to re-execute the snapshot of the database. We also recommend setting the parameter wal_keep_segments = 0. Please follow the official PostgreSQL documentation for fine-tuning of WAL retention.

We strongly recommend reading and understanding the official documentation regarding the mechanics and configuration of the PostgreSQL write-ahead log.

Setting up replication permissions

Replication can only be performed by a database user that has appropriate permissions and only for a configured number of hosts. In order to give a user replication permissions, define a PostgreSQL role that has at least the REPLICATION and LOGIN permissions. For example:

CREATE ROLE name REPLICATION LOGIN;

Superusers have by default both of the above roles.

Add the following lines at the end of the pg_hba.conf PostgreSQL configuration file to configure client authentication for database replication. The PostgreSQL server should allow replication to take place between the server machine and the host on which the Debezium PostgreSQL connector is running.

Note that the authentication refers to the database superuser postgres. You may change this accordingly, if some other user with REPLICATION and LOGIN permissions has been created.

pg_hba.conf configuration file parameter settings
############ REPLICATION ##############
# allow replication for postgres locally (i.e. on the server machine)
local   replication     postgres                          trust
# allow postgres on localhost to receive replication changes over IPv4
host    replication     postgres  127.0.0.1/32            trust
# allow postgres on localhost to receive replication changes over IPv6
host    replication     postgres  ::1/128                 trust

See the PostgreSQL documentation for more information on network masks.

Database Test Environment Set-up

For testing purposes, a database named test and a table named test_table are created with the following DDL commands:

Database SQL commands for test database/table creation
CREATE DATABASE test;

CREATE TABLE test_table (
    id char(10) NOT NULL,
    code        char(10),
    PRIMARY KEY (id)
);

Decoding Output Plug-in Test

Test that wal2json is working properly by obtaining the test_table changes using the pg_recvlogical PostgreSQL client application, which controls PostgreSQL logical decoding streams.

Before starting, make sure that you are logged in as a user with database replication permissions, as configured in a previous step. Otherwise, slot creation and streaming fail with the following error message:

pg_recvlogical: could not connect to server: FATAL:  no pg_hba.conf entry for replication connection from host "[local]", user "root", SSL off

In the test environment, the user with replication permissions is postgres.

Also, make sure that the PATH environment variable is set so that pg_recvlogical can be found. If not, update the PATH environment variable appropriately. For example, in the test environment:

export PATH="$PATH:/usr/pgsql-9.6/bin"
  • Create a slot named test_slot for the database named test, using the logical output plug-in wal2json
$ pg_recvlogical -d test --slot test_slot --create-slot -P wal2json
  • Begin streaming changes from the logical replication slot test_slot for the database test
$ pg_recvlogical -d test --slot test_slot --start -o pretty-print=1 -f -
  • Perform some basic DML operations on test_table to trigger INSERT/UPDATE/DELETE change events
Interactive PostgreSQL terminal, SQL commands
test=# INSERT INTO test_table (id, code) VALUES('id1', 'code1');
INSERT 0 1
test=# update test_table set code='code2' where id='id1';
UPDATE 1
test=# delete from test_table where id='id1';
DELETE 1

Upon the INSERT, UPDATE and DELETE events, the wal2json plug-in outputs the table changes as captured by pg_recvlogical.

Output for INSERT event
{
  "change": [
    {
      "kind": "insert",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code1     "]
    }
  ]
}
Output for UPDATE event
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code2     "],
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["id1       "]
      }
    }
  ]
}
Output for DELETE event
{
  "change": [
    {
      "kind": "delete",
      "schema": "public",
      "table": "test_table",
      "oldkeys": {
        "keynames": ["id"],
        "keytypes": ["character(10)"],
        "keyvalues": ["id1       "]
      }
    }
  ]
}

Note that the REPLICA IDENTITY of the table test_table is set to DEFAULT.

When the test is finished, the slot test_slot for the database test can be removed by the following command:

$ pg_recvlogical -d test --slot test_slot --drop-slot

REPLICA IDENTITY is a PostgreSQL-specific table-level setting that determines the amount of information available to logical decoding for UPDATE and DELETE events.

There are 4 possible values for REPLICA IDENTITY:

  • DEFAULT – UPDATE and DELETE events will only contain the previous values for the primary key columns of a table
  • NOTHING – UPDATE and DELETE events will not contain any information about the previous value of any of the table columns
  • FULL – UPDATE and DELETE events will contain the previous values of all the table’s columns
  • INDEX index name – UPDATE and DELETE events will contain the previous values of the columns contained in the index definition named index name

You can modify and check the REPLICA IDENTITY for a table with the following commands:

ALTER TABLE test_table REPLICA IDENTITY FULL;
test=# \d+ test_table
                         Table "public.test_table"
 Column |     Type      | Modifiers | Storage  | Stats target | Description
 -------+---------------+-----------+----------+--------------+------------
 id     | character(10) | not null  | extended |              |
 code   | character(10) |           | extended |              |
Indexes:
    "test_table_pkey" PRIMARY KEY, btree (id)
Replica Identity: FULL

Here is the output of the wal2json plug-in for an UPDATE event with REPLICA IDENTITY set to FULL. Compare it with the respective output when REPLICA IDENTITY is set to DEFAULT.

Output for `UPDATE`
{
  "change": [
    {
      "kind": "update",
      "schema": "public",
      "table": "test_table",
      "columnnames": ["id", "code"],
      "columntypes": ["character(10)", "character(10)"],
      "columnvalues": ["id1       ", "code2     "],
      "oldkeys": {
        "keynames": ["id", "code"],
        "keytypes": ["character(10)", "character(10)"],
        "keyvalues": ["id1       ", "code1     "]
      }
    }
  ]
}


https://debezium.io/documentation/reference/0.10/postgres-plugins.html