Hi all,
I'm currently working on a Logstash + OpenSearch integration: I'm ingesting data from PostgreSQL and trying to keep my DB in sync with OpenSearch using the following config.
```
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://host.docker.internal:5432/orders"
    jdbc_user => "user"
    jdbc_password => "pass"
    schedule => "*/2 * * * *"
    jdbc_driver_class => "org.postgresql.Driver"
    clean_run => false
    jdbc_paging_enabled => false
    tracking_column => "modification_time"
    use_column_value => true
    tracking_column_type => "timestamp"
    statement => "select distinct on (ob.order_id)
        modification_time,
        ob.order_timestamp,
        ob.order_id,
        opt.type,
        obs.status
      from order ob
      join order_p opt on ob.id = opt.order_id
      join order_s obs on ob.id = obs.order_id
      where (modification_time > :sql_last_value and modification_time < now())
        and ob.client_id in (select distinct(client_id) from order)
      order by ob.order_id, obs.timestamp desc"
  }
}

filter {
  mutate {
    remove_field => ["@timestamp", "@version", "modification_time"]
  }
}

output {
  opensearch {
    hosts => ["https://host.docker.internal:9200"]
    index => "order_test"
    user => "admin"
    password => "admin"
    document_id => "%{order_id}"
    ssl => true
    ssl_certificate_verification => false
  }
}
```
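For context, as far as I understand the jdbc input, :sql_last_value is persisted between runs via record_last_run (which defaults to true) in the file set by last_run_metadata_path. A minimal sketch of pinning that explicitly inside the jdbc block (the path here is just an example, not from my actual config):

```
jdbc {
  # persist :sql_last_value between runs (this is already the default)
  record_last_run => true
  # example path; by default the plugin writes a .logstash_jdbc_last_run
  # file under Logstash's home/data directory
  last_run_metadata_path => "/usr/share/logstash/.logstash_jdbc_last_run"
  # ... same settings as above ...
}
```

Deleting that file (or setting clean_run => true) resets :sql_last_value to the 1970-01-01 epoch, which matches the first iteration in the trace below.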
The behaviour of :sql_last_value is not as expected: when modification_time is updated in the DB, :sql_last_value does not pick up the new value. The trace seems stuck at a specific time, for example:
first iteration → '1970-01-01 00:00:00.000000000'
second iteration → '2021-11-25 17:00:00.000000000'
every iteration after that → '2021-11-26 12:05:00.000000000'
But I have 3 new rows with more recent modification_time values that the plugin is not picking up automatically. Sometimes, if I restart the container, it picks up the latest value, but as soon as I add new rows the problem persists.
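To illustrate, this is the kind of manual check I can run in psql, substituting the stuck value for :sql_last_value by hand (the timestamp is the one from the trace above); the new rows should match this filter:

```sql
-- hypothetical manual check: substitute the stuck :sql_last_value by hand
select ob.order_id, modification_time
from order ob
where modification_time > '2021-11-26 12:05:00'  -- the stuck :sql_last_value
  and modification_time < now();
```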
Thanks all!