Make link validator run against PRs

Author: Matthew de Detrich, 2023-06-13 15:19:37 +02:00 (committed by Matthew de Detrich)
parent c56edca78f
commit 8e60ef5c6d
17 changed files with 53 additions and 35 deletions


@@ -1,12 +1,13 @@
 name: Link Validator
-on:
-  schedule:
-    - cron: '0 6 * * 1'
-  workflow_dispatch:
 permissions: {}
+on:
+  pull_request:
+  workflow_dispatch:
+  schedule:
+    - cron: '0 6 * * 1'
 jobs:
   validate-links:
     runs-on: ubuntu-20.04
@@ -33,7 +34,7 @@ jobs:
       - name: Setup Coursier
        uses: coursier/setup-action@v1.3.3
-      - name: create the Pekko site
+      - name: Create the Pekko site
        run: sbt -Dpekko.genjavadoc.enabled=true "Javaunidoc/doc; Compile/unidoc; docs/paradox"
      - name: Run Link Validator


@@ -143,7 +143,7 @@ Java
 If you find it inconvenient to handle the `CurrentClusterState` you can use
-@scala[@scaladoc[ClusterEvent.InitialStateAsEvents](cluster.ClusterEvent$$InitialStateAsEvents$)] @java[@javadoc[ClusterEvent.initialStateAsEvents()](pekko.cluster.ClusterEvent#initialStateAsEvents())] as parameter to @apidoc[subscribe](pekko.cluster.Cluster) {scala="#subscribe(subscriber:org.apache.pekko.actor.ActorRef,to:Class[_]*):Unit" java="#subscribe(org.apache.pekko.actor.ActorRef,org.apache.pekko.cluster.ClusterEvent.SubscriptionInitialStateMode,java.lang.Class...)"}.
+@scala[@scaladoc[ClusterEvent.InitialStateAsEvents](org.apache.pekko.cluster.ClusterEvent$$InitialStateAsEvents$)] @java[@javadoc[ClusterEvent.initialStateAsEvents()](pekko.cluster.ClusterEvent#initialStateAsEvents())] as parameter to @apidoc[subscribe](pekko.cluster.Cluster) {scala="#subscribe(subscriber:org.apache.pekko.actor.ActorRef,to:Class[_]*):Unit" java="#subscribe(org.apache.pekko.actor.ActorRef,org.apache.pekko.cluster.ClusterEvent.SubscriptionInitialStateMode,java.lang.Class...)"}.
 That means that instead of receiving `CurrentClusterState` as the first message you will receive
 the events corresponding to the current state to mimic what you would have seen if you were
 listening to the events when they occurred in the past. Note that those initial events only correspond


@@ -13,7 +13,7 @@ asynchronous. This effort has been undertaken to ensure that all functions are
 available equally when running within a single JVM or on a cluster of hundreds
 of machines. The key for enabling this is to go from remote to local by way of
 optimization instead of trying to go from local to remote by way of
-generalization. See [this classic paper]($pekko.doc.dns$/docs/misc/smli_tr-94-29.pdf)
+generalization. See [this classic paper](https://dl.acm.org/doi/pdf/10.5555/974938)
 for a detailed discussion on why the second approach is bound to fail.

 ## Ways in which Transparency is Broken


@@ -104,7 +104,7 @@ you require a `ByteString` that only blocks as much memory as necessary for its
 get a @apidoc[CompactByteString](util.CompactByteString) instance. If the `ByteString` represented only a slice of the original array, this will
 result in copying all bytes in that slice.
-`ByteString` inherits all methods from @scaladoc[IndexedSeq](scala.collection.immutable.IndexedSeq), and it also has some new ones. For more information, look up the @apidoc[util.ByteString](util.ByteString) class and @scaladoc[its companion object](util.ByteString$) in the ScalaDoc.
+`ByteString` inherits all methods from @scaladoc[IndexedSeq](scala.collection.immutable.IndexedSeq), and it also has some new ones. For more information, look up the @apidoc[util.ByteString](util.ByteString) class and @scaladoc[its companion object](org.apache.pekko.util.ByteString$) in the ScalaDoc.
 `ByteString` also comes with its own optimized builder and iterator classes @apidoc[ByteStringBuilder](util.ByteStringBuilder) and
 @apidoc[ByteIterator](util.ByteIterator) which provide extra features in addition to those of normal builders and iterators.


@@ -7,7 +7,7 @@ Plugins maintained within the Pekko organization are:
 * [pekko-persistence-cassandra]($pekko.doc.dns$/docs/pekko-persistence-cassandra/current/) (no Durable State support)
 * [pekko-persistence-jdbc]($pekko.doc.dns$/docs/pekko-persistence-jdbc/current/) (Durable State only supported with Postgres and H2)
 * [pekko-persistence-r2dbc]($pekko.doc.dns$/docs/pekko-persistence-r2dbc/current/)
-* [pekko-persistence-dynamodb]($pekko.doc.dns$/docs/pekko-persistence-dynamodb/current/)
+* [pekko-persistence-dynamodb](https://github.com/apache/incubator-pekko-persistence-dynamodb)
 Plugins can be selected either by "default" for all persistent actors,
 or "individually", when a persistent actor defines its own set of plugins.


@@ -659,7 +659,7 @@ See @ref:[Scaling out](typed/persistence.md#scaling-out) in the documentation of
 ## At-Least-Once Delivery
-To send messages with at-least-once delivery semantics to destinations you can @scala[mix-in @scaladoc[AtLeastOnceDelivery](pekko.persistence.AtLeastOnceDelivery) trait to your @scaladoc[PersistentActor](pekko.persistence.PersistentActor)]@java[extend the @javadoc[AbstractPersistentActorWithAtLeastOnceDelivery](pekko.persistence.AbstractPersistentActorWithAtLeastOnceDelivery) class instead of @javadoc[AbstractPersistentActor](pekko.persistence.AbstractPersistentActor)]
+To send messages with at-least-once delivery semantics to destinations you can @scala[mix-in @scaladoc[AtLeastOnceDelivery](org.apache.pekko.persistence.AtLeastOnceDelivery) trait to your @scaladoc[PersistentActor](pekko.persistence.PersistentActor)]@java[extend the @javadoc[AbstractPersistentActorWithAtLeastOnceDelivery](pekko.persistence.AbstractPersistentActorWithAtLeastOnceDelivery) class instead of @javadoc[AbstractPersistentActor](pekko.persistence.AbstractPersistentActor)]
 on the sending side. It takes care of re-sending messages when they
 have not been confirmed within a configurable timeout.
@@ -688,7 +688,7 @@ an actor selection.
 @@@
-Use the @scala[@scaladoc[deliver](persistence.AtLeastOnceDelivery#deliver(destination:org.apache.pekko.actor.ActorPath)(deliveryIdToMessage:Long=%3EAny):Unit)]@java[@javadoc[deliver](pekko.persistence.AbstractPersistentActorWithAtLeastOnceDelivery#deliver(org.apache.pekko.actor.ActorPath,org.apache.pekko.japi.Function))] method to send a message to a destination. Call the @scala[@scaladoc[confirmDelivery](persistence.AtLeastOnceDelivery#confirmDelivery(deliveryId:Long):Boolean)]@java[@javadoc[confirmDelivery](pekko.persistence.AtLeastOnceDeliveryLike#confirmDelivery(long))] method
+Use the @scala[@scaladoc[deliver](org.apache.pekko.persistence.AtLeastOnceDelivery#deliver(destination:org.apache.pekko.actor.ActorPath)(deliveryIdToMessage:Long=%3EAny):Unit)]@java[@javadoc[deliver](pekko.persistence.AbstractPersistentActorWithAtLeastOnceDelivery#deliver(org.apache.pekko.actor.ActorPath,org.apache.pekko.japi.Function))] method to send a message to a destination. Call the @scala[@scaladoc[confirmDelivery](org.apache.pekko.persistence.AtLeastOnceDelivery#confirmDelivery(deliveryId:Long):Boolean)]@java[@javadoc[confirmDelivery](pekko.persistence.AtLeastOnceDeliveryLike#confirmDelivery(long))] method
 when the destination has replied with a confirmation message.
 ### Relationship between deliver and confirmDelivery


@@ -38,8 +38,8 @@ This project contains several samples illustrating how to use Distributed Data.
 ## Cluster Sharding
-@java[@extref[Sharding example project](samples:pekko-sample-cluster-sharding-java)]
-@scala[@extref[Sharding example project](samples:pekko-sample-cluster-sharding-scala)]
+@java[@extref[Sharding example project](samples:pekko-sample-sharding-java)]
+@scala[@extref[Sharding example project](samples:pekko-sample-sharding-scala)]
 This project contains a KillrWeather sample illustrating how to use Cluster Sharding.
@@ -80,9 +80,7 @@ This project demonstrates the work pulling pattern using Pekko Cluster.
 ## Kafka to Cluster Sharding
-@extref[Kafka to Cluster Sharding example project](samples:pekko-sample-kafka-to-sharding)
+@extref[Kafka to Cluster Sharding example project](samples:pekko-sample-kafka-to-sharding-scala)
 This project demonstrates how to use the External Shard Allocation strategy to co-locate the consumption of Kafka
 partitions with the shard that processes the messages.


@@ -507,7 +507,7 @@ message from network failures and JVM crashes, in addition to graceful terminati
 actor.
 The heartbeat arrival times is interpreted by an implementation of
-[The Phi Accrual Failure Detector](http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf).
+[The Phi Accrual Failure Detector](https://dspace.jaist.ac.jp/dspace/bitstream/10119/4784/1/IS-RR-2004-010.pdf).
 The suspicion level of failure is given by a value called *phi*.
 The basic idea of the phi failure detector is to express the value of *phi* on a scale that
@@ -856,8 +856,8 @@ pekko {
 ```
 You can look at the
-@java[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-java)]
-@scala[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-scala)]
+@java[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-java/)]
+@scala[@extref[Cluster with docker-compose example project](samples:pekko-sample-cluster-docker-compose-scala/)]
 to see what this looks like in practice.
 ### Running in Docker/Kubernetes


@@ -20,7 +20,7 @@ problems in the format of "recipes". The purpose of this page is to give inspira
 various small tasks involving streams. The recipes in this page can be used directly as-is, but they are most powerful as
 starting points: customization of the code snippets is warmly encouraged. The recipes can be extended or can provide a
 basis for the implementation of other [patterns]($pekko.doc.dns$/docs/pekko-connectors/current/patterns.html) involving
-[Pekko Connectors]($pekko.doc.dns$/docs/pekko-connectors/current).
+[Pekko Connectors]($pekko.doc.dns$/docs/pekko-connectors/current/).
 This part also serves as supplementary material for the main body of documentation. It is a good idea to have this page
 open while reading the manual and look for examples demonstrating various streaming concepts


@@ -305,7 +305,7 @@ more advanced operators which may need to be debugged at some point.
 @@@ div { .group-scala }
-The helper trait @scaladoc[stream.stage.StageLogging](StageLogging) is provided to enable you to obtain a @apidoc[event.LoggingAdapter]
+The helper trait @scaladoc[stream.stage.StageLogging](org.apache.pekko.stream.stage.StageLogging) is provided to enable you to obtain a @apidoc[event.LoggingAdapter]
 inside of a @apidoc[stage.GraphStage] as long as the @apidoc[stream.Materializer] you're using is able to provide you with a logger.
 In that sense, it serves a very similar purpose as @apidoc[actor.ActorLogging] does for Actors.


@@ -812,7 +812,7 @@ to the @ref:[reference configuration](general/configuration-reference.md#config-
 ## Example
 Ray Roestenburg's example code from his blog, which unfortunately is only available on
-[web archive](https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-pekko-actors-with-testkit_12.html),
+[web archive](https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html),
 adapted to work with Pekko 2.x.
 @@snip [TestKitUsageSpec.scala](/docs/src/test/scala/docs/testkit/TestKitUsageSpec.scala) { #testkit-usage }


@@ -232,7 +232,7 @@ support a greater number of shards.
 #### Example project for external allocation strategy
-@extref[Kafka to Cluster Sharding](samples:pekko-sample-kafka-to-sharding)
+@extref[Kafka to Cluster Sharding](samples:pekko-sample-kafka-to-sharding-scala)
 is an example project that can be downloaded, and with instructions of how to run, that demonstrates how to use
 external sharding to co-locate Kafka partition consumption with shards.
@@ -255,7 +255,7 @@ the entity actors for example by defining receive timeout (@apidoc[context.setRe
 If a message is already enqueued to the entity when it stops itself the enqueued message
 in the mailbox will be dropped. To support graceful passivation without losing such
 messages the entity actor can send @apidoc[typed.*.ClusterSharding.Passivate] to the
-@apidoc[typed.ActorRef]@scala[[@scaladoc[ShardCommand](cluster.sharding.typed.scaladsl.ClusterSharding.ShardCommand)]]@java[<@javadoc[ShardCommand](pekko.cluster.sharding.typed.javadsl.ClusterSharding.ShardCommand)>] that was passed in to
+@apidoc[typed.ActorRef]@scala[[@scaladoc[ShardCommand](org.apache.pekko.cluster.sharding.typed.scaladsl.ClusterSharding.ShardCommand)]]@java[<@javadoc[ShardCommand](pekko.cluster.sharding.typed.javadsl.ClusterSharding.ShardCommand)>] that was passed in to
 the factory method when creating the entity. The optional `stopMessage` message
 will be sent back to the entity, which is then supposed to stop itself, otherwise it will
 be stopped automatically. Incoming messages will be buffered by the `Shard` between reception
@@ -829,8 +829,8 @@ as described in @ref:[Shard allocation](#shard-allocation).
 ## Example project
-@java[@extref[Sharding example project](samples:pekko-sample-cluster-sharding-java)]
-@scala[@extref[Sharding example project](samples:pekko-sample-cluster-sharding-scala)]
+@java[@extref[Sharding example project](samples:pekko-sample-sharding-java)]
+@scala[@extref[Sharding example project](samples:pekko-sample-sharding-scala)]
 is an example project that can be downloaded, and with instructions of how to run.
 This project contains a KillrWeather sample illustrating how to use Cluster Sharding.


@@ -79,7 +79,7 @@ Java
 : @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-protocol }
-The sender would use its own @scala[`ActorRef[Response]`]@java[`ActorRef<Response>`], which it can access through @scala[@scaladoc[ActorContext.self](actor.typed.scaladsl.ActorContext#self:org.apache.pekko.actor.typed.ActorRef[T])]@java[@javadoc[ActorContext.getSelf()](pekko.actor.typed.javadsl.ActorContext#getSelf())], for the `replyTo`.
+The sender would use its own @scala[`ActorRef[Response]`]@java[`ActorRef<Response>`], which it can access through @scala[@scaladoc[ActorContext.self](org.apache.pekko.actor.typed.scaladsl.ActorContext#self:org.apache.pekko.actor.typed.ActorRef[T])]@java[@javadoc[ActorContext.getSelf()](pekko.actor.typed.javadsl.ActorContext#getSelf())], for the `replyTo`.
 Scala
 : @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-send }


@@ -413,4 +413,4 @@ The @apidoc[SnapshotStoreSpec] in the Persistence TCK provides a capability flag
 The following plugins support Replicated Event Sourcing:
 * [Pekko Persistence Cassandra]($pekko.doc.dns$/docs/pekko-persistence-cassandra/current/index.html)
-* [Pekko Persistence JDBC]($pekko.doc.dns$/docs/pekko-persistence-jdbc/current)
+* [Pekko Persistence JDBC]($pekko.doc.dns$/docs/pekko-persistence-jdbc/current/)


@@ -16,6 +16,7 @@ package org.apache.pekko
 import com.lightbend.paradox.sbt.ParadoxPlugin
 import com.lightbend.paradox.sbt.ParadoxPlugin.autoImport._
 import com.lightbend.paradox.apidoc.ApidocPlugin
+import com.lightbend.paradox.projectinfo.ParadoxProjectInfoPluginKeys.projectInfoVersion
 import org.apache.pekko.PekkoParadoxPlugin.autoImport._
 import sbt.Keys._
 import sbt._
@@ -37,14 +38,20 @@ object Paradox {
       "extref.github.base_url" -> (GitHub.url(version.value) + "/%s"), // for links to our sources
       "extref.samples.base_url" -> s"$pekkoBaseURL/docs/pekko-samples/current/%s",
       "pekko.doc.dns" -> s"$pekkoBaseURL",
-      "scaladoc.pekko.base_url" -> s"$pekkoBaseURL/api/pekko/current/org/apache",
+      "scaladoc.pekko.base_url" -> s"$pekkoBaseURL/api/pekko/${projectInfoVersion.value}/org/apache",
       "scaladoc.pekko.http.base_url" -> s"$pekkoBaseURL/api/pekko-http/current/org/apache",
+      "scaladoc.org.apache.pekko.base_url" -> s"$pekkoBaseURL/api/pekko/${projectInfoVersion.value}",
+      "scaladoc.org.apache.pekko.http.base_url" -> s"$pekkoBaseURL/api/pekko-http/current",
       "javadoc.java.base_url" -> "https://docs.oracle.com/en/java/javase/11/docs/api/java.base/",
       "javadoc.java.link_style" -> "direct",
-      "javadoc.pekko.base_url" -> s"$pekkoBaseURL/japi/pekko/current/org/apache",
+      "javadoc.pekko.base_url" -> s"$pekkoBaseURL/japi/pekko/${projectInfoVersion.value}/org/apache",
       "javadoc.pekko.link_style" -> "direct",
       "javadoc.pekko.http.base_url" -> s"$pekkoBaseURL/japi/pekko-http/current/org/apache",
       "javadoc.pekko.http.link_style" -> "frames",
+      "javadoc.org.apache.pekko.base_url" -> s"$pekkoBaseURL/japi/pekko/${projectInfoVersion.value}",
+      "javadoc.org.apache.pekko.link_style" -> "direct",
+      "javadoc.org.apache.pekko.http.base_url" -> s"$pekkoBaseURL/japi/pekko-http/current",
+      "javadoc.org.apache.pekko.http.link_style" -> "frames",
       "javadoc.com.fasterxml.jackson.annotation.base_url" -> "https://javadoc.io/doc/com.fasterxml.jackson.core/jackson-annotations/latest/",
       "javadoc.com.fasterxml.jackson.annotation.link_style" -> "direct",
       "javadoc.com.fasterxml.jackson.databind.base_url" -> "https://javadoc.io/doc/com.fasterxml.jackson.core/jackson-databind/latest/",


@@ -133,6 +133,7 @@ object PekkoBuild {
   final val DefaultJavacOptions = Seq("-encoding", "UTF-8", "-Xlint:unchecked", "-XDignore.symbol.file")
   lazy val defaultSettings: Seq[Setting[_]] = Def.settings(
+    projectInfoVersion := (if (isSnapshot.value) "snapshot" else version.value),
     Dependencies.Versions,
     resolverSettings,
     TestExtras.Filter.settings,


@@ -48,10 +48,20 @@ site-link-validator {
     "https://github.com/"
     # Github links generated by sbt-license-report
     "http://github.com/"
-    "https://www.scala-lang.org/api/2.13.10/scala/runtime/AbstractFunction1.html"
-    "https://www.scala-lang.org/api/2.13.10/scala/runtime/AbstractFunction2.html"
-    "https://www.scala-lang.org/api/2.13.10/scala/runtime/AbstractFunction3.html"
-    "https://www.scala-lang.org/api/2.13.10/scala/runtime/AbstractPartialFunction.html"
+    # Other links generated by sbt-license-report
+    "http://asm.objectweb.org/license.html"
+    "http://jackson.codehaus.org"
+    "https://glassfish.dev.java.net"
+    "http://beust.com/jcommander"
+    "http://pholser.github.com/jopt-simple"
+    "http://pojosr.googlecode.com/"
+    "http://team.ops4j.org/wiki/display/ops4j/Tinybundles"
+    "https://www.scala-lang.org/api/2.13.11/scala/runtime/AbstractFunction1.html"
+    "https://www.scala-lang.org/api/2.13.11/scala/runtime/AbstractFunction2.html"
+    "https://www.scala-lang.org/api/2.13.11/scala/runtime/AbstractFunction3.html"
+    "https://www.scala-lang.org/api/2.13.11/scala/runtime/AbstractPartialFunction.html"
+    # Bug, see https://github.com/scala/bug/issues/12807 and https://github.com/lampepfl/dotty/issues/17973
+    "https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/file/StandardOpenOption$.html"
   ]
   non-https-whitelist = [
non-https-whitelist = [
@@ -118,5 +128,6 @@ site-link-validator {
     "http://www.scalacheck.org"
     "http://www.scalatest.org"
     "http://www.slf4j.org"
+    "http://www.eclipse.org/org/documents/edl-v10.php"
   ]
 }