[Solved] - BM 4.7 backup: problems, the /var/backups/bluemind folder only holds a few MB

Hello everyone.
I have just finished installing my BlueMind server (with SpamAssassin, OpenDKIM, Amavis, etc.) and migrating my data from Zimbra. I have all my old mail, I can send and receive messages without any problem, and the BlueMind connector for Thunderbird works. I have even configured the Nextcloud add-on for the BlueMind server and everything is fine.
Where things go wrong is the backups. Apparently they are not running at all…

root@hal-bluemind:~# du -sh /var/backups/bluemind/
6,3M    /var/backups/bluemind/

I am not really sure where to look to get this working.
The /var/backups/bluemind folder is mounted from an NFS share on another machine.
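For what it's worth, here is the kind of quick check I can run from the BlueMind host to make sure the NFS mount is actually there and writable (just a sketch, nothing BlueMind-specific):

mount | grep /var/backups/bluemind
df -h /var/backups/bluemind
touch /var/backups/bluemind/write-test && rm /var/backups/bluemind/write-test && echo "write OK"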

Could anyone help me?
I wanted to attach my anonymized core.log, but only images can be uploaded, and I would rather not paste a file of several tens of thousands of lines here…
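In case it helps, this is roughly how I extracted the chunks below from the anonymized log (assuming the core log lives at /var/log/bm/core.log on my install):

grep -nE 'ERROR|FAILURE' /var/log/bm/core.log | less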

So here are a few excerpts where FAILURE/ERROR shows up:

2022-07-24 01:00:01,889 [pool-9-thread-2] n.b.s.s.i.Scheduler ERROR - finished with FAILURE status called from here
java.lang.Throwable: sched.finish(FAILURE)
        at net.bluemind.scheduledjob.scheduler.impl.Scheduler.finish(Scheduler.java:152)
        at net.bluemind.dataprotect.job.DataProtectJob.tick(DataProtectJob.java:125)
        at net.bluemind.scheduledjob.scheduler.impl.JobTicker.run(JobTicker.java:71)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2022-07-24 01:00:01,898 [pool-8-thread-3] n.b.s.s.i.Scheduler INFO - [RunIdImpl [domainUid=global.virt, jid=DataProtect, startTime=1658624401268, endTime=1658624401895, groupId=3f331ada-9fc9-48f4-8f7f-7b1e4fc26deb, status=FAILURE]] finished and recorded: FAILURE, duration: 627ms.

2022-07-25 01:02:49,560 [pool-9-thread-4] n.b.s.s.i.Scheduler ERROR - finished with FAILURE status called from here
java.lang.Throwable: sched.finish(FAILURE)
        at net.bluemind.scheduledjob.scheduler.impl.Scheduler.finish(Scheduler.java:152)
        at net.bluemind.dataprotect.job.DataProtectJob.tick(DataProtectJob.java:125)
        at net.bluemind.scheduledjob.scheduler.impl.JobTicker.run(JobTicker.java:71)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
2022-07-25 01:02:49,567 [pool-8-thread-1] n.b.s.s.i.Scheduler INFO - [RunIdImpl [domainUid=global.virt, jid=DataProtect, startTime=1658710857895, endTime=1658710969561, groupId=cfeaf9c8-a0b3-4854-aeb0-b855f026df4d, status=FAILURE]] finished and recorded: FAILURE, duration: 111666ms.
2022-07-24 07:14:16,901 [BM-Core-27] n.b.u.s.i.TokenAuthProvider ERROR - Fail to validate token for admin0 from [127.0.1.1, IP-freebox, IP-srv-save, 127.0.0.1, IP-srvbluemind]
2022-07-24 07:14:16,952 [BM-Core-27] n.b.a.s.Authentication INFO - login: 'admin0@global.virt', origin: 'bm-hps', from: '[127.0.1.1, IP-freebox, IP-srv-save, 127.0.0.1, IP-srvbluemind]' successfully authentified (status: Ok)
2022-07-24 07:14:18,235 [BM-Core-3] n.b.c.c.s.i.ContainerStoreService WARN - null value for existing item Item{id: 7, uid: admin0_global.virt, dn: admin0 admin0, v: 12} with store net.bluemind.mailbox.persistence.MailboxStore@173bf279
2022-07-24 07:19:18,129 [vert.x-eventloop-thread-4] n.b.c.r.s.v.RestSockJSProxyServer ERROR - error in sock io.vertx.ext.web.handler.sockjs.impl.SockJSSession@e47b5e7: {}
io.vertx.core.http.HttpClosedException: Connection was closed
2022-07-24 07:21:31,625 [core-heartbeat-timer] n.b.s.s.StateContext INFO - Core state heartbeat : core.state.running
2022-07-24 07:21:33,639 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.async.thread-6] n.b.m.c.s.ProductChecksService ERROR - [Autodiscover@bm-mapi] Status CRIT (null)
2022-07-24 07:21:34,199 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.IO.thread-in-2] c.h.n.t.TcpIpConnection INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Connection[id=10, /IP-srvbluemind:5701->/IP-srvbluemind:35385, qualifier=null, endpoint=[IP-srvbluemind]:35385, alive=false, type=JAVA_CLIENT] closed. Reason: Connection closed by the other side
2022-07-24 07:21:34,204 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.event-1] c.h.c.i.ClientEndpointManager INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Destroying ClientEndpoint{connection=Connection[id=10, /IP-srvbluemind:5701->/IP-srvbluemind:35385, qualifier=null, endpoint=[IP-srvbluemind]:35385, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='e28324d7-78e1-4f22-ba39-7730dc88f69a', ownerUuid='510d67c5-9b23-4732-b142-7e52cc9734cd'}, ownerConnection=true, authenticated=true, clientVersion=3.12.12, creationTime=1658647282379, latest statistics=null}
2022-07-24 07:21:35,624 [core-heartbeat-timer] n.b.s.s.StateContext INFO - Core state heartbeat : core.state.running
2022-07-24 07:26:23,271 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.partition-operation.thread-2] c.h.r.i.o.ReadManyOperation ERROR - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] sequence:2 is too large. The current tailSequence is:-1
java.lang.IllegalArgumentException: sequence:2 is too large. The current tailSequence is:-1
        at com.hazelcast.ringbuffer.impl.RingbufferContainer.checkBlockableReadSequence(RingbufferContainer.java:454)
        at com.hazelcast.ringbuffer.impl.operations.ReadManyOperation.beforeRun(ReadManyOperation.java:58)
        at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:147)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:125)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
2022-07-24 07:26:23,291 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.event-2] c.h.c.i.ClientEndpointManager INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Destroying ClientEndpoint{connection=Connection[id=2, /IP-srvbluemind:5701->/IP-srvbluemind:36279, qualifier=null, endpoint=[IP-srvbluemind]:36279, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='ea8f078a-3793-4912-949f-982e7080f118', ownerUuid='510d67c5-9b23-4732-b142-7e52cc9734cd'}, ownerConnection=true, authenticated=true, clientVersion=3.12.12, creationTime=1658574751873, latest statistics=null}
2022-07-24 07:26:23,433 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.partition-operation.thread-6] c.h.r.i.o.ReadManyOperation ERROR - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] sequence:11 is too large. The current tailSequence is:-1
java.lang.IllegalArgumentException: sequence:11 is too large. The current tailSequence is:-1
        at com.hazelcast.ringbuffer.impl.RingbufferContainer.checkBlockableReadSequence(RingbufferContainer.java:454)
        at com.hazelcast.ringbuffer.impl.operations.ReadManyOperation.beforeRun(ReadManyOperation.java:58)
        at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:197)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.run(OperationExecutorImpl.java:408)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl.runOrExecute(OperationExecutorImpl.java:435)
        at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvokeLocal(Invocation.java:648)
        at com.hazelcast.spi.impl.operationservice.impl.Invocation.doInvoke(Invocation.java:633)
        at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke0(Invocation.java:592)
        at com.hazelcast.spi.impl.operationservice.impl.Invocation.invoke(Invocation.java:256)
        at com.hazelcast.spi.impl.operationservice.impl.InvocationBuilderImpl.invoke(InvocationBuilderImpl.java:61)
        at com.hazelcast.client.impl.protocol.task.AbstractPartitionMessageTask.processMessage(AbstractPartitionMessageTask.java:67)
        at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.initializeAndProcessMessage(AbstractMessageTask.java:137)
        at com.hazelcast.client.impl.protocol.task.AbstractMessageTask.run(AbstractMessageTask.java:117)
        at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:163)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:159)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:127)
        at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110)
2022-07-24 07:26:23,624 [core-heartbeat-timer] n.b.s.s.StateContext INFO - Core state heartbeat : core.state.running
2022-07-24 07:26:23,694 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.IO.thread-in-1] c.h.n.t.TcpIpConnection WARN - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Connection[id=4, /IP-srvbluemind:5701->/IP-srvbluemind:44983, qualifier=null, endpoint=[IP-srvbluemind]:44983, alive=false, type=JAVA_CLIENT] closed. Reason: Exception in Connection[id=4, /IP-srvbluemind:5701->/IP-srvbluemind:44983, qualifier=null, endpoint=[IP-srvbluemind]:44983, alive=true, type=JAVA_CLIENT], thread=hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.IO.thread-in-1
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
        at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:113)
        at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:369)
        at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:354)
        at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:280)
        at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:235)
2022-07-24 07:26:23,695 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.event-3] c.h.c.i.ClientEndpointManager INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Destroying ClientEndpoint{connection=Connection[id=4, /IP-srvbluemind:5701->/IP-srvbluemind:44983, qualifier=null, endpoint=[IP-srvbluemind]:44983, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='0fe70a0c-9e2f-44fe-ba60-d8e31f468d91', ownerUuid='510d67c5-9b23-4732-b142-7e52cc9734cd'}, ownerConnection=true, authenticated=true, clientVersion=3.12.12, creationTime=1658574756254, latest statistics=null}
2022-07-24 07:26:24,179 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.IO.thread-in-1] c.h.n.t.TcpIpConnection INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Connection[id=3, /IP-srvbluemind:5701->/IP-srvbluemind:41823, qualifier=null, endpoint=[IP-srvbluemind]:41823, alive=false, type=JAVA_CLIENT] closed. Reason: Connection closed by the other side
2022-07-24 07:26:24,180 [hz.bm-core-b455fcc9-df3a-434f-aff9-86d02876a16e.event-3] c.h.c.i.ClientEndpointManager INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Destroying ClientEndpoint{connection=Connection[id=3, /IP-srvbluemind:5701->/IP-srvbluemind:41823, qualifier=null, endpoint=[IP-srvbluemind]:41823, alive=false, type=JAVA_CLIENT], principal='ClientPrincipal{uuid='2222756a-94ac-4e39-b968-ceff7b9fa027', ownerUuid='510d67c5-9b23-4732-b142-7e52cc9734cd'}, ownerConnection=true, authenticated=true, clientVersion=3.12.12, creationTime=1658574756129, latest statistics=null}
2022-07-24 07:26:24,203 [hz.ShutdownThread] c.h.i.Node INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Running shutdown hook... Current state: ACTIVE
2022-07-24 07:26:24,204 [Thread-7] n.b.a.l.ApplicationLauncher INFO - Stopping BlueMind Core...
2022-07-24 07:26:24,204 [Thread-7] n.b.s.s.StateContext INFO - Core state transition from core.state.running to core.stopped
2022-07-24 07:26:24,205 [hz.ShutdownThread] c.h.c.LifecycleService INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] [IP-srvbluemind]:5701 is SHUTTING_DOWN
2022-07-24 07:26:24,206 [hz.ShutdownThread] n.b.h.c.i.ClusterMember INFO - HZ cluster switched to state SHUTTING_DOWN, running: true
2022-07-24 07:26:24,210 [hz.ShutdownThread] c.h.i.Node WARN - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Terminating forcefully...
2022-07-24 07:26:24,210 [vert.x-worker-thread-18] n.b.s.s.i.StateObserverVerticle INFO - New core state is CORE_STATE_STOPPING, cause: BUS_EVENT
2022-07-24 07:26:24,213 [hz.ShutdownThread] c.h.i.Node INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Shutting down connection manager...
2022-07-24 07:26:24,225 [Thread-7] n.b.a.l.ApplicationLauncher INFO - BlueMind Core stopped.
2022-07-24 07:26:24,232 [hz.ShutdownThread] c.h.i.Node INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Shutting down node engine...
2022-07-24 07:26:24,322 [hz.ShutdownThread] c.h.i.NodeExtension INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Destroying node NodeExtension.
2022-07-24 07:26:24,322 [hz.ShutdownThread] c.h.i.Node INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Hazelcast Shutdown is completed in 113 ms.
2022-07-24 07:26:24,327 [hz.ShutdownThread] c.h.c.LifecycleService INFO - [IP-srvbluemind]:5701 [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] [IP-srvbluemind]:5701 is SHUTDOWN
2022-07-24 07:26:24,327 [hz.ShutdownThread] n.b.h.c.i.ClusterMember INFO - HZ cluster switched to state SHUTDOWN, running: false
2022-07-24 07:27:00,472 [Start Level: Equinox Container: 9338cf65-1bc6-4946-986f-a70679b58c76] OSGI INFO - OSGI Log activated
2022-07-24 07:27:00,481 [Start Level: Equinox Container: 9338cf65-1bc6-4946-986f-a70679b58c76] n.b.s.s.SentrySettingsActivator INFO - Sentry settings activator launched
2022-07-24 07:27:00,523 [main] n.b.a.l.ApplicationLauncher INFO - Starting BlueMind Application...
2022-07-24 07:27:00,542 [main] n.b.h.c.MQ WARN - HZ native client is not possible in this JVM, client fragment missing (com.hazelcast.client.config.ClientConfig cannot be found by net.bluemind.hornetq.client_4.1.62053)
2022-07-24 07:27:00,545 [main] n.b.h.c.MQ INFO - HZ cluster member implementation was chosen for bm-core.
2022-07-24 07:27:00,554 [main] n.b.h.c.i.ClusterMember INFO - ************* HZ CONNECT *************
2022-07-24 07:27:00,567 [main] n.b.h.c.i.ClusterMember INFO - HZ setup for net.bluemind.application.launcher.ApplicationLauncher$$Lambda$51/19717364@1fc2b765....
2022-07-24 07:27:00,570 [main] n.b.a.l.ApplicationLauncher INFO - BlueMind Application started
2022-07-24 07:27:00,779 [bm-hz-connect] c.h.i.AddressPicker INFO - [LOCAL] [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Interfaces is disabled, trying to pick one address from TCP-IP config addresses: [IP-srvbluemind]
2022-07-24 07:27:00,779 [bm-hz-connect] c.h.i.AddressPicker INFO - [LOCAL] [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Prefer IPv4 stack is true, prefer IPv6 addresses is false
2022-07-24 07:27:00,796 [bm-hz-connect] c.h.i.AddressPicker INFO - [LOCAL] [bluemind-72D26E8A-5BB1-48A4-BC71-EEE92E0CE4EE] [3.12.12] Picked [IP-srvbluemind]:5701, using socket ServerSocket[addr=/IP-srvbluemind,localport=5701], bind any local is false

If you can help me or point me to where to look, that would be great.
Thanks in advance for your replies.
Have a good day.

Hello,
The first thing to check is in the BlueMind admin interface, under the job scheduler. You can view the backup job's logs by clicking the row of its latest run:

Thanks for your reply.
I don't have that menu; I can only set the number of daily backups and/or browse the backups (DataProtect), where nothing happens (even when I click resynchronize).



It's tucked away under system management :slight_smile:

Oh!
Thanks, I hadn't seen it…
Here is the log; everything shows green, but it ends with an error:

25 juil. 2022 01:00:57 - INFO - Démarrage de la sauvegarde

25 juil. 2022 01:00:57 - INFO - 1/6: Backup starting for 1 servers.

25 juil. 2022 01:00:57 - INFO - 1/6: Checking /var/backups/bluemind on each hosts

25 juil. 2022 01:00:58 - INFO - 2/6: /var/backups/bluemind checked on IP_Srv_Bluemind

25 juil. 2022 01:00:58 - INFO - 2/6: Check parent backup generation

25 juil. 2022 01:00:58 - INFO - 3/6: Parent backup generation checked successfully on host: IP_Srv_Bluemind

25 juil. 2022 01:00:58 - INFO - 3/6: Starting backup on all servers

25 juil. 2022 01:00:58 - INFO - 3/6: Backup tags bm/settings,bm/core,bm/redirector,bm/es,bm/xmpp,bm/webmail,bm/pgsql-data,bm/conf,bm/ac,bm/contact,mail/smtp,bm/hps,metrics/influxdb,bm/nginx,bm/pgsql,bm/cal,mail/imap,filehosting/data

25 juil. 2022 01:00:58 - INFO - 3/6: Backup tag bm/settings

25 juil. 2022 01:00:58 - INFO - 3/6: Backup tag bm/core

25 juil. 2022 01:00:58 - INFO - 3/6: Backup tag bm/settings ending

25 juil. 2022 01:01:00 - INFO - 3/6: RSYNC: (permits 7) /usr/bin/rsync --exclude-from=/etc/bm-node/rsync.excludes -rltDH --delete --numeric-ids --relative --delete-excluded /var/backups/bluemind/work/directory/ /var/backups/bluemind/dp_spool/rsync/IP_Srv_Bluemind/bm/core/2/

25 juil. 2022 01:01:00 - INFO - 3/6: Waiting for rsync completions...

25 juil. 2022 01:01:00 - INFO - 3/6: RSYNC: 81853 started.

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/core with worker DirectoryWorker ending

25 juil. 2022 01:01:00 - INFO - 3/6: Waiting for rsync completions...

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/core with worker CyrusSdsWorker ending

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/core ending

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/redirector

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/redirector ending

25 juil. 2022 01:01:00 - INFO - 3/6: Backup tag bm/es

25 juil. 2022 01:01:07 - INFO - 3/6: Waiting for rsync completions...

25 juil. 2022 01:01:07 - INFO - 3/6: RSYNC: (permits 7) /usr/bin/rsync --exclude-from=/etc/bm-node/rsync.excludes -rltDH --delete --numeric-ids --relative --delete-excluded /var/spool/bm-elasticsearch/repo/ /var/backups/bluemind/dp_spool/rsync/IP_Srv_Bluemind/bm/es/4/

25 juil. 2022 01:01:07 - INFO - 3/6: RSYNC: 81894 started.

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/es with worker ElasticWorker ending

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/es ending

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/xmpp

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/xmpp ending

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/webmail

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/webmail ending

25 juil. 2022 01:01:16 - INFO - 3/6: Backup tag bm/pgsql-data

25 juil. 2022 01:01:28 - INFO - 3/6: DUMP: Dump done in /var/backups/bluemind/work/pgsql-data/dump.sql

25 juil. 2022 01:01:28 - INFO - 3/6: RSYNC: (permits 7) /usr/bin/rsync --exclude-from=/etc/bm-node/rsync.excludes -rltDH --delete --numeric-ids --relative --delete-excluded /var/backups/bluemind/work/pgsql-data/ /var/backups/bluemind/dp_spool/rsync/IP_Srv_Bluemind/bm/pgsql-data/5/

25 juil. 2022 01:01:28 - INFO - 3/6: Waiting for rsync completions...

25 juil. 2022 01:01:28 - INFO - 3/6: RSYNC: 82079 started.

25 juil. 2022 01:01:33 - INFO - 3/6: Backup tag bm/pgsql-data with worker PgWorkerBmData ending

25 juil. 2022 01:01:33 - INFO - 3/6: Backup tag bm/pgsql-data ending

25 juil. 2022 01:01:33 - INFO - 3/6: Backup tag bm/conf

25 juil. 2022 01:01:49 - INFO - 3/6: Protect configuration files starting...

25 juil. 2022 01:01:49 - INFO - 3/6: Démarrage de la sauvegarde des fichiers de configuration...

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-core /etc/bm-eas

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-core /etc/bm-eas

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-elasticsearch /etc/bm-hps /etc/bm-lmtpd /etc/bm-mapi /etc/bm-milter /etc/bm-node /etc/bm-php

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-sds-proxy /etc/bm-tick /etc/bm-tika /etc/bm-webmail

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-elasticsearch /etc/bm-hps /etc/bm-lmtpd /etc/bm-mapi /etc/bm-milter /etc/bm-node /etc/bm-php

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-sds-proxy /etc/bm-tick /etc/bm-tika /etc/bm-webmail

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-webserver /etc/bm-xmpp /etc/bm-ysnp /etc/imapd.conf /etc/cyrus.conf /etc/cyrus-partitions

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/bm-webserver /etc/bm-xmpp /etc/bm-ysnp /etc/imapd.conf /etc/cyrus.conf /etc/cyrus-partitions

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/cyrus-admins /etc/postfix

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /etc/cyrus-admins /etc/postfix

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /usr/share/bm-elasticsearch/config/elasticsearch.yml

25 juil. 2022 01:01:50 - INFO - 3/6: configurationFilesProtect: /usr/share/bm-elasticsearch/config/elasticsearch.yml

25 juil. 2022 01:01:50 - INFO - 3/6: RSYNC: (permits 7) /usr/bin/rsync --exclude-from=/etc/bm-node/rsync.excludes -rltDH --delete --numeric-ids --relative --delete-excluded /var/backups/bluemind/work/conf/ /var/backups/bluemind/dp_spool/rsync/IP_Srv_Bluemind/bm/conf/6/

25 juil. 2022 01:01:50 - INFO - 3/6: Waiting for rsync completions...

25 juil. 2022 01:01:50 - INFO - 3/6: RSYNC: 82315 started.

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/conf with worker ConfigWorker ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/conf ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/ac

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/ac ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/contact

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/contact ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag mail/smtp

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag mail/smtp ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/hps

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/hps ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag metrics/influxdb

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag metrics/influxdb ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/nginx

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/nginx ending

25 juil. 2022 01:02:01 - INFO - 3/6: Backup tag bm/pgsql

25 juil. 2022 01:02:27 - INFO - 3/6: DUMP: pg_dump: error: query failed: ERROR: relation "repack.log_18319" does not exist pg_dump: error: query was: LOCK TABLE repack.log_18319 IN ACCESS SHARE MODE

25 juil. 2022 01:02:48 - INFO - 6/6: pg_dump failed with exit code 1

25 juil. 2022 01:02:49 - PROGRESS - #progress 100

Well, at least this showed me where the backup can be configured, and above all how to launch it manually.
Just now, without doing anything more than launching it by hand, the backup ran through and finished without a single error…
We'll see whether it runs automatically tonight and completes without errors…
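To check, tomorrow I'll simply compare the size of the backup folder again and glance at what ended up in the rsync spool (same paths as in the job log above):

du -sh /var/backups/bluemind/
ls -lh /var/backups/bluemind/dp_spool/rsync/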
If so, I will mark the topic as solved; otherwise, I'll come back and post here.
Thanks a lot in any case @Olivier.Vailleau

There is still an error on the database, though:

… But that's beyond my knowledge.
Could there be a stale database lock hanging around? Maybe a quick reboot to check?
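If you want to dig a bit before rebooting: the dump complains about a relation in the repack schema, which, if I'm not mistaken, is created by pg_repack during database maintenance, so an interrupted run could have left an orphaned table behind. A rough way to look, assuming the BlueMind database is called bm and psql is run as the postgres user:

# list schemas; a lingering "repack" schema hints at an interrupted pg_repack run
sudo -u postgres psql bm -c '\dn'
# any orphaned repack tables such as the repack.log_18319 mentioned in the dump error?
sudo -u postgres psql bm -c '\dt repack.*'
# locks currently waiting to be granted
sudo -u postgres psql bm -c "SELECT pid, locktype, relation::regclass, mode, granted FROM pg_locks WHERE NOT granted;"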

I did a reboot yesterday evening just in case, and last night the backup ran through fine…
What happened? → a mystery…
In any case, it started working again :wink:
Thanks a lot for your help and have a good day :smiley: