Docs: link pages with TLS (#28258)

Enno 2019-11-27 17:33:44 +01:00 committed by Johan Andrén
parent 6d893fb571
commit 4946c957eb
10 changed files with 18 additions and 18 deletions

View file

@@ -233,4 +233,4 @@ akka.extensions = ["akka.cluster.pubsub.DistributedPubSub"]
As described in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) of Akka, the message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
In other words, messages can be lost over the wire.
-If you are looking for at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](http://doc.akka.io/docs/akka-stream-kafka/current/home.html).
+If you are looking for an at-least-once delivery guarantee, we recommend [Alpakka Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).

View file

@@ -13,7 +13,7 @@ asynchronous. This effort has been undertaken to ensure that all functions are
available equally when running within a single JVM or on a cluster of hundreds
of machines. The key for enabling this is to go from remote to local by way of
optimization instead of trying to go from local to remote by way of
-generalization. See [this classic paper](http://doc.akka.io/docs/misc/smli_tr-94-29.pdf)
+generalization. See [this classic paper](https://doc.akka.io/docs/misc/smli_tr-94-29.pdf)
for a detailed discussion on why the second approach is bound to fail.
## Ways in which Transparency is Broken

View file

@@ -1,7 +1,7 @@
# Persistence - Building a storage backend
Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
-A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see [Community plugins](http://akka.io/community/)
+A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page; see [Community plugins](https://akka.io/community/).
This documentation describes how to build a new storage backend.
Applications can provide their own plugins by implementing a plugin API and activating them by configuration.
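As an illustration, activating a custom journal plugin comes down to configuration along these lines (a minimal sketch; the `my-app.my-journal` id and `com.example.MyJournal` class are hypothetical placeholders):

```hocon
# point Akka Persistence at the custom journal plugin
akka.persistence.journal.plugin = "my-app.my-journal"

# the plugin's own configuration section
my-app.my-journal {
  # hypothetical class implementing the journal plugin API
  class = "com.example.MyJournal"
  # dispatcher the plugin runs on (Akka's default for journal plugins)
  plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
}
```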

View file

@@ -2,7 +2,7 @@
Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
-A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see [Community plugins](http://akka.io/community/)
+A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page; see [Community plugins](https://akka.io/community/).
Two popular plugins are:

View file

@@ -43,7 +43,7 @@ query types for the most common query scenarios, that most journals are likely t
## Read Journals
In order to issue queries, one first has to obtain an instance of a `ReadJournal`.
-Read journals are implemented as [Community plugins](http://akka.io/community/#plugins-to-akka-persistence-query), each targeting a specific datastore (for example Cassandra or JDBC
+Read journals are implemented as [Community plugins](https://akka.io/community/#plugins-to-akka-persistence-query), each targeting a specific datastore (for example Cassandra or JDBC
databases). For example, given a library that provides an `akka.persistence.query.my-read-journal`, obtaining the related
journal is as simple as:
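For Scala, a minimal sketch of that lookup (the plugin id is the hypothetical one named above, and the plain `ReadJournal` interface stands in for whatever concrete journal type the plugin provides):

```scala
import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.scaladsl.ReadJournal

val system = ActorSystem("QueryExample")

// look up the read journal by the plugin id it is configured under
val readJournal: ReadJournal =
  PersistenceQuery(system).readJournalFor[ReadJournal]("akka.persistence.query.my-read-journal")
```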
@@ -56,7 +56,7 @@ Java
Journal implementers are encouraged to put this identifier in a variable known to the user, such that one can access it via
@scala[`readJournalFor[NoopJournal](NoopJournal.identifier)`]@java[`getJournalFor(NoopJournal.class, NoopJournal.identifier)`]; however, this is not enforced.
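A sketch of that convention (both the `NoopJournal` object and its plugin id are hypothetical):

```scala
// hypothetical read journal companion exposing its plugin id as a constant,
// so callers can write readJournalFor[NoopJournal](NoopJournal.identifier)
object NoopJournal {
  final val identifier = "akka.persistence.query.noop-read-journal"
}
```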
-Read journal implementations are available as [Community plugins](http://akka.io/community/#plugins-to-akka-persistence-query).
+Read journal implementations are available as [Community plugins](https://akka.io/community/#plugins-to-akka-persistence-query).
### Predefined queries
@@ -273,7 +273,7 @@ Java
## Query plugins
Query plugins are various (mostly community-driven) `ReadJournal` implementations for all kinds
-of available datastores. The complete list of available plugins is maintained on the Akka Persistence Query [Community Plugins](http://akka.io/community/#plugins-to-akka-persistence-query) page.
+of available datastores. The complete list of available plugins is maintained on the Akka Persistence Query [Community Plugins](https://akka.io/community/#plugins-to-akka-persistence-query) page.
The plugin for LevelDB is described in @ref:[Persistence Query for LevelDB](persistence-query-leveldb.md).

View file

@@ -12,7 +12,7 @@ This documentation page touches upon @ref[Akka Persistence](persistence.md), so
## Introduction
-When working on long running projects using @ref:[Persistence](persistence.md), or any kind of [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) architectures,
+When working on long running projects using @ref:[Persistence](persistence.md), or any kind of [Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) architectures,
schema evolution becomes one of the more important technical aspects of developing your application.
The requirements as well as our own understanding of the business domain may (and will) change over time.
@@ -40,7 +40,7 @@ In recent years we have observed a tremendous move towards immutable append-only
the prime technique successfully being used in these settings. For an excellent overview of why and how immutable data makes scalability
and systems design much simpler, you may want to read Pat Helland's [Immutability Changes Everything](http://cidrdb.org/cidr2015/Papers/CIDR15_Paper16.pdf) whitepaper.
-Since with [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) the **events are immutable** and usually never deleted the way schema evolution is handled
+Since with [Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) the **events are immutable** and usually never deleted, the way schema evolution is handled
differs from how one would go about it in a mutable database setting (e.g. in typical CRUD database applications).
The system needs to be able to continue to work in the presence of "old" events which were stored under the "old" schema.
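One common way to achieve that in Akka Persistence is an `EventAdapter` that upcasts old events as they are read back from the journal. A minimal sketch, with hypothetical event types:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// hypothetical schema versions: V1 lacked the currency field added in V2
final case class OrderPlacedV1(amount: Long)
final case class OrderPlacedV2(amount: Long, currency: String)

class OrderEventAdapter extends EventAdapter {
  override def manifest(event: Any): String = "" // no manifest needed in this sketch

  // events are written in the current schema, unchanged
  override def toJournal(event: Any): Any = event

  // old events are upcast to the current schema during replay
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case OrderPlacedV1(amount) => EventSeq.single(OrderPlacedV2(amount, currency = "EUR"))
    case other                 => EventSeq.single(other)
  }
}
```

The adapter is then bound to the journal via the plugin's `event-adapters` and `event-adapter-bindings` configuration sections.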
@@ -92,11 +92,11 @@ Binary serialization formats that we have seen work well for long-lived applicat
single fields focused like in protobuf or thrift, and usually requires using some kind of schema registry.
Users who want their data to be human-readable directly in the write-side
-datastore may opt to use plain-old [JSON](http://json.org) as the storage format, though that comes at a cost of lacking support for schema
+datastore may opt to use plain-old [JSON](https://json.org) as the storage format, though that comes at the cost of lacking support for schema
evolution and relatively large marshalling latency.
There are plenty of excellent blog posts explaining the various trade-offs between popular serialization formats;
-one post we would like to highlight is the very well illustrated [Schema evolution in Avro, Protocol Buffers and Thrift](http://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html)
+one post we would like to highlight is the very well illustrated [Schema evolution in Avro, Protocol Buffers and Thrift](https://martin.kleppmann.com/2012/12/05/schema-evolution-in-avro-protocol-buffers-thrift.html)
by Martin Kleppmann.
### Provided default serializers
@@ -451,7 +451,7 @@ Java
This technique only applies if the Akka Persistence plugin you are using provides this capability.
Check the documentation of your favourite plugin to see if it supports this style of persistence.
-If it doesn't, you may want to skim the [list of existing journal plugins](http://akka.io/community/#journal-plugins), just in case some other plugin
+If it doesn't, you may want to skim the [list of existing journal plugins](https://akka.io/community/#journal-plugins), just in case some other plugin
for your favourite datastore *does* provide this capability.
@@@

View file

@@ -131,11 +131,11 @@ An end-of-frame marker, e.g. a trailing newline `\n`, can be used for framing via `Framing.delimiter`.
Alternatively, a length field can be used to build a framing protocol.
There is a BidiFlow implementing this protocol provided by `Framing.simpleFramingProtocol`,
see
-@scala[[ScalaDoc](http://doc.akka.io/api/akka/current/akka/stream/scaladsl/Framing$.html)]
+@scala[[ScalaDoc](https://doc.akka.io/api/akka/current/akka/stream/scaladsl/Framing$.html)]
@java[[Javadoc](http://doc.akka.io/japi/akka/current/akka/stream/javadsl/Framing.html#simpleFramingProtocol-int-)]
for more information.
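For illustration, a newline-delimited framing stage can be built like this (a minimal sketch; the 1024-byte frame limit is an arbitrary choice):

```scala
import akka.NotUsed
import akka.stream.scaladsl.{ Flow, Framing }
import akka.util.ByteString

// chop the incoming byte stream into frames at each newline; the stream
// fails if a frame exceeds maximumFrameLength, and allowTruncation = true
// lets a final frame without a trailing newline through
val lineFraming: Flow[ByteString, ByteString, NotUsed] =
  Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true)
```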
-@scala[[JsonFraming](http://doc.akka.io/api/akka/current/akka/stream/scaladsl/JsonFraming$.html)]@java[[JsonFraming](http://doc.akka.io/japi/akka/current/akka/stream/javadsl/JsonFraming.html#objectScanner-int-)] separates valid JSON objects from incoming `ByteString` objects:
+@scala[[JsonFraming](https://doc.akka.io/api/akka/current/akka/stream/scaladsl/JsonFraming$.html)]@java[[JsonFraming](https://doc.akka.io/japi/akka/current/akka/stream/javadsl/JsonFraming.html#objectScanner-int-)] separates valid JSON objects from incoming `ByteString` objects:
Scala
: @@snip [JsonFramingSpec.scala](/akka-stream-tests/src/test/scala/akka/stream/scaladsl/JsonFramingSpec.scala) { #using-json-framing }

View file

@@ -7,7 +7,7 @@ Classic Pub Sub can be used by leveraging the `.toClassic` adapters until @githu
Until the new Distributed Publish Subscribe API is available (see @github[#26338](#26338)),
you can use Classic Distributed Publish Subscribe
-[coexisting](coexisting.md) with new Cluster and actors. To do this, add following dependency in your project:
+@ref:[coexisting](coexisting.md) with new Cluster and actors. To do this, add the following dependency to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"

View file

@@ -228,7 +228,7 @@ In the context of the IoT system, this guide introduced the following concepts,
To continue your journey with Akka, we recommend:
-* Start building your own applications with Akka, make sure you [get involved in our amazing community](http://akka.io/get-involved) for help if you get stuck.
+* Start building your own applications with Akka, and make sure you [get involved in our amazing community](https://akka.io/get-involved) for help if you get stuck.
* If you'd like some additional background and detail, read the rest of the @ref:[reference documentation](../actors.md) and check out some of the @ref:[books and videos](../../additional/books.md) on Akka.
* If you are interested in functional programming, read how actors can be defined in a @ref:[functional style](../actors.md#functional-style). In this guide the object-oriented style was used, but you can mix both as you like.

View file

@@ -446,11 +446,11 @@ project-info {
]
api-docs: [
{
-url: ${project-info.scaladoc}"slf4j/index.html"
+url: ${project-info.scaladoc}"event/slf4j/index.html"
text: "API (Scaladoc)"
}
{
-url: ${project-info.javadoc}"?akka/slf4j/package-summary.html"
+url: ${project-info.javadoc}"?akka/event/slf4j/package-summary.html"
text: "API (Javadoc)"
}
]