
Hosting a Static Website with S3

· 2 min read
orange
programmer on jvm platform

A static website is one that needs no backend service: a personal blog, a résumé, a portfolio, and so on.
Since all of the content is static, there is nothing to run on a server; you only need to host the static files somewhere.
This article describes how to host a static website on AWS S3.
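
As a rough sketch of the basic workflow using the AWS CLI (the bucket name and local path below are placeholders, and the bucket additionally needs a policy that allows public reads):

aws s3 mb s3://my-blog-example-bucket
aws s3 sync ./public s3://my-blog-example-bucket
aws s3 website s3://my-blog-example-bucket --index-document index.html --error-document 404.html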

Packaging a Java Application into a Binary with Gradle's JavaPackager Plugin

· 6 min read
orange
programmer on jvm platform

In an earlier article I described how to package a Java application into a binary with GraalVM. That approach requires installing the native-image tool into GraalVM and compiling the application with it, which raises the bar for Java developers and is not very flexible. The build can also fail when the code does not satisfy GraalVM's constraints, for example when it relies on Java reflection (GraalVM needs that information at compile time to generate the binary). Below I introduce an alternative: building a binary with Gradle's JavaPackager plugin.
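
A minimal sketch of what the Gradle setup can look like; the plugin coordinates and task properties below follow the JavaPackager README as best I recall, so treat the version number and the main class as placeholders to verify against the plugin's documentation.

build.gradle
plugins {
    id 'io.github.fvarrui.javapackager' version '1.7.5'
}

task packageApp(type: io.github.fvarrui.javapackager.gradle.PackageTask, dependsOn: build) {
    mainClass = 'com.example.Main' // application entry point (placeholder)
    bundleJre = true               // bundle a JRE so the result runs without a system JDK
}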

Fixing the Stream Closed Exception in Command-Line Invocations with Kotlin Coroutines

· 3 min read
orange
programmer on jvm platform

One of our services needs to invoke an external program (rclone), so I wrote a class to wrap command-line invocation, built mainly on kotlinx.coroutines.
The code is shown below:

CommandExecutorImpl.kt
import java.io.IOException
import java.io.InputStream
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.Job
import kotlinx.coroutines.NonCancellable
import kotlinx.coroutines.cancelAndJoin
import kotlinx.coroutines.coroutineScope
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.asFlow
import kotlinx.coroutines.flow.collect
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class CommandExecutorImpl : CommandExecutor, LogCapability {

    override suspend fun execute(options: CommandExecutionOptions) =
        coroutineScope {
            val command: String = options.command.joinToString(separator = " ")
            logger.info("$ {}", command)
            val process: Process = createProcess(options)

            // Read stdout and stderr concurrently so the process cannot block on a full pipe.
            val asyncReadStdOut = asyncRead(input = process.inputStream, consume = options.onNewStdoutRead)
            val asyncReadStderr = asyncRead(input = process.errorStream, consume = options.onNewStderrRead)
            try {
                // Poll until the process exits.
                while (process.isAlive) {
                    delay(500)
                }
                if (process.exitValue() != 0) {
                    throw IllegalStateException("Process exited with non-zero exit code")
                }
            } finally {
                // https://kotlinlang.org/docs/cancellation-and-timeouts.html#run-non-cancellable-block
                withContext(NonCancellable) {
                    process.destroy()
                    asyncReadStdOut.cancelAndJoin()
                    asyncReadStderr.cancelAndJoin()
                }
            }
        }

    private suspend fun createProcess(options: CommandExecutionOptions): Process =
        withContext(Dispatchers.IO) {
            Runtime.getRuntime().exec(options.command.toTypedArray())
        }

    private fun CoroutineScope.asyncRead(input: InputStream, consume: suspend (String) -> Unit): Job =
        launch {
            try {
                input.bufferedReader()
                    .lineSequence()
                    .asFlow()
                    .collect { line ->
                        consume(line)
                    }
            } catch (ex: IOException) {
                logger.warn("Error while reading from process", ex)
                throw ex
            }
        }

    companion object : LogCapability

}
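
For context, a hypothetical usage sketch: the exact shape of CommandExecutionOptions is not shown above, so the constructor call below is inferred from the fields the class references (command, onNewStdoutRead, onNewStderrRead).

import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val executor = CommandExecutorImpl()
    executor.execute(
        CommandExecutionOptions(
            command = listOf("rclone", "version"),
            onNewStdoutRead = { line -> println(line) },           // stream stdout line by line
            onNewStderrRead = { line -> System.err.println(line) } // stream stderr line by line
        )
    )
}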

Recently I noticed that this class sometimes throws java.io.IOException: Stream closed.
The stack trace is as follows:

14:10:38.016 [DefaultDispatcher-worker-117] WARN com.fastonetech.billing.sync.infra.command.CommandExecutorImpl - Error while reading from process
java.io.IOException: Stream closed
at java.base/java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:168)
at java.base/java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:270)
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:313)
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:188)
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:177)
at java.base/java.io.BufferedReader.fill(BufferedReader.java:162)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:329)
at java.base/java.io.BufferedReader.readLine(BufferedReader.java:396)
at kotlin.io.LinesSequence$iterator$1.hasNext(ReadWrite.kt:79)
at kotlinx.coroutines.flow.FlowKt__BuildersKt$asFlow$$inlined$unsafeFlow$5.collect(SafeCollector.common.kt:114)
at com.fastonetech.billing.sync.infra.command.CommandExecutorImpl$asyncRead$1.invokeSuspend(CommandExecutorImpl.kt:58)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:106)
at kotlinx.coroutines.internal.LimitedDispatcher.run(LimitedDispatcher.kt:42)
at kotlinx.coroutines.scheduling.TaskImpl.run(Tasks.kt:95)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:570)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:750)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:677)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:664)

Below I walk through the cause of this problem and its solution.

Enabling Comments on a Docusaurus Blog

· 4 min read
orange
programmer on jvm platform

I recently wanted to enable comments on my Docusaurus blog, but Docusaurus does not provide this out of the box, so I had to wire it up myself.

My current solution is Giscus, a comment system built on top of GitHub Discussions.
Commenting through Discussions requires a GitHub account, and the comment data has to live in a public repository.
If your blog is a private project, consider creating a separate public repository just for the comments, so the original project can stay private.

The rest of this post assumes you are familiar with GitHub.

Common Labels in GitLab

· 3 min read
orange
programmer on jvm platform

Labels are a very useful GitLab feature for managing issues.
Below I describe how the common GitLab labels are defined and used; I hope you find it helpful.

Fixing elm Dependency Download Failures

· 4 min read
orange
programmer on jvm platform

I was recently building the open-radiant project locally.
It is an open-source project from JetBrains for generating AI art; the online demo is at code2art.
I ran into a few problems during the build.
One of them was that dependency downloads failed when running elm make; the log is shown below:

Starting downloads...

● elm/json 1.1.3
● elm-community/list-extra 8.2.2
● elm/random 1.0.0
● elm/file 1.0.5
● elm/virtual-dom 1.0.2
● elm/parser 1.1.0
● rtfeldman/elm-iso8601-date-strings 1.1.3
● elm/url 1.0.0
● elm-community/random-extra 3.1.0
● elm-explorations/webgl 1.1.1
● elm/core 1.0.2
✗ elm/http 2.0.0
✗ owanturist/elm-union-find 1.0.0
✗ elm/bytes 1.0.8
✗ elm/svg 1.0.1
✗ avh4/elm-color 1.0.0
✗ elm/time 1.0.0
✗ elm-community/json-extra 4.2.0
✗ fredcy/elm-parseint 2.0.1
✗ noahzgordon/elm-color-extra 1.0.2
✗ elm/html 1.0.0
✗ elm/browser 1.0.2
✗ newlandsvalley/elm-binary-base64 1.0.3
✗ elm-community/easing-functions 2.0.0

Dependency problem!
-- PROBLEM DOWNLOADING PACKAGE -------------------------------------------------

I was trying to download the source code for avh4/elm-color 1.0.0, so I tried to
fetch:

https://github.com/avh4/elm-color/zipball/1.0.0/

But my HTTP library is giving me the following error message:

ConnectionTimeout

Are you somewhere with a slow internet connection? Or no internet? Does the link
I am trying to fetch work in your browser? Maybe the site is down? Does your
internet connection have a firewall that blocks certain domains? It is usually
something like that!

Hash Algorithm Errors from OpenSSL v3.0 in Node.js v17 and Above, and How to Fix Them

· 3 min read
orange
programmer on jvm platform

I was recently building the open-radiant project locally.
It is an open-source project from JetBrains for generating AI art; the online demo is at code2art.
I ran into a few problems during the build.

One of them was an error when running npm start; the relevant output is shown below:


> jb-animation-generator@1.0.0 start
> ./node_modules/.bin/webpack-dev-server --mode=development

ℹ 「wds」: Project is running at http://localhost:8080/
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /home/orange/Documents/Project/Github/open-radiant
node:internal/crypto/hash:71
this[kHandle] = new _Hash(algorithm, xofLen);
^

Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:71:19)
at Object.createHash (node:crypto:133:10)
at module.exports (/home/orange/Documents/Project/Github/open-radiant/node_modules/webpack/lib/util/createHash.js:135:53)
at NormalModule._initBuildHash (/home/orange/Documents/Project/Github/open-radiant/node_modules/webpack/lib/NormalModule.js:417:16)
at handleParseError (/home/orange/Documents/Project/Github/open-radiant/node_modules/webpack/lib/NormalModule.js:471:10)
at /home/orange/Documents/Project/Github/open-radiant/node_modules/webpack/lib/NormalModule.js:503:5
at /home/orange/Documents/Project/Github/open-radiant/node_modules/webpack/lib/NormalModule.js:358:12
at /home/orange/Documents/Project/Github/open-radiant/node_modules/loader-runner/lib/LoaderRunner.js:373:3
at iterateNormalLoaders (/home/orange/Documents/Project/Github/open-radiant/node_modules/loader-runner/lib/LoaderRunner.js:214:10)
at Array.<anonymous> (/home/orange/Documents/Project/Github/open-radiant/node_modules/loader-runner/lib/LoaderRunner.js:205:4) {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}

Node.js v18.12.1

Cause

After some searching online, I found that this problem arises because Node.js v17 and above ship with OpenSSL v3.0.

OpenSSL is an open-source cryptography library that provides encryption and authentication services.
Node.js v17 and above use OpenSSL v3.0, which moved some older hash algorithms out of its default provider, so code that uses them now fails (webpack 4, for example, hashes modules with MD4, which OpenSSL 3.0 no longer enables by default).

Solution

There are several ways to solve this problem:

  • Downgrade Node.js to v16 or below.
  • Run export NODE_OPTIONS=--openssl-legacy-provider before npm start, as shown below.
    This environment variable enables OpenSSL's legacy provider, which still supports the old hash algorithms.
  • Upgrade the relevant dependencies to their latest versions, which may fix the problem.
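
For the second option, the two commands look like this; run them in the same shell session so npm inherits the variable:

export NODE_OPTIONS=--openssl-legacy-provider
npm start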

References

Reducing "Can't contact LDAP server" Errors in nslcd by Increasing olcIdleTimeout

· 7 min read
orange
programmer on jvm platform

A customer's jobs were failing, and they suspected the Can't contact LDAP server errors from the nslcd service were the cause.

The nslcd log looks like this:

[fastone@layout01 ~]$ sudo journalctl -t nslcd| tail --line 20
Mar 28 11:58:24 layout01 nslcd[25607]: [debc9e] <group="fsadmin"> connected to LDAP server ldap://172.20.3.126:389
Mar 28 11:59:17 layout01 nslcd[25607]: [fe8aa7] <passwd=2032> ldap_search_ext() failed: Can't contact LDAP server: Broken pipe
Mar 28 11:59:17 layout01 nslcd[25607]: [fe8aa7] <passwd=2032> no available LDAP server found, sleeping 1 seconds
Mar 28 11:59:18 layout01 nslcd[25607]: [fe8aa7] <passwd=2032> connected to LDAP server ldap://172.20.3.126:389
Mar 28 12:00:01 layout01 nslcd[25607]: [272b88] <group/member="root"> ldap_result() failed: Can't contact LDAP server
Mar 28 12:00:36 layout01 nslcd[25607]: [66b17f] <group=2001> ldap_search_ext() failed: Can't contact LDAP server: Broken pipe
Mar 28 12:00:36 layout01 nslcd[25607]: [66b17f] <group=2001> no available LDAP server found, sleeping 1 seconds
Mar 28 12:00:37 layout01 nslcd[25607]: [66b17f] <group=2001> connected to LDAP server ldap://172.20.3.126:389
Mar 28 12:00:38 layout01 nslcd[25607]: [a15030] <passwd=2004> ldap_search_ext() failed: Can't contact LDAP server: Broken pipe
Mar 28 12:00:38 layout01 nslcd[25607]: [a15030] <passwd=2004> no available LDAP server found, sleeping 1 seconds
Mar 28 12:00:39 layout01 nslcd[25607]: [a15030] <passwd=2004> connected to LDAP server ldap://172.20.3.126:389
Mar 28 12:00:39 layout01 nslcd[25607]: [9b7b93] <passwd=2001> ldap_result() failed: Can't contact LDAP server
Mar 28 12:00:54 layout01 nslcd[25607]: [97bb68] <passwd=2011> ldap_result() failed: Can't contact LDAP server
Mar 28 12:01:36 layout01 nslcd[25607]: [005d16] <group=2011> ldap_result() failed: Can't contact LDAP server
Mar 28 12:03:39 layout01 nslcd[25607]: [b9081a] <group="fsadmin"> ldap_search_ext() failed: Can't contact LDAP server: Broken pipe
Mar 28 12:03:39 layout01 nslcd[25607]: [b9081a] <group="fsadmin"> no available LDAP server found, sleeping 1 seconds
Mar 28 12:03:40 layout01 nslcd[25607]: [b9081a] <group="fsadmin"> connected to LDAP server ldap://172.20.3.126:389
Mar 28 12:03:47 layout01 nslcd[25607]: [0f614b] <group/member="root"> ldap_search_ext() failed: Can't contact LDAP server: Broken pipe
Mar 28 12:03:47 layout01 nslcd[25607]: [0f614b] <group/member="root"> no available LDAP server found, sleeping 1 seconds
Mar 28 12:03:48 layout01 nslcd[25607]: [0f614b] <group/member="root"> connected to LDAP server ldap://172.20.3.126:389

The log shows that nslcd frequently reports Can't contact LDAP server.

Cause

The ldap-server's idle timeout is being hit.

The errors occur because idle connections hit the ldap-server's timeout, which makes nslcd log Can't contact LDAP server.
We set the ldap-server's connection timeout to a default of 30s to ensure connections are not held open indefinitely and overload the server.
But seeing this error so often led the customer to believe our ldap-server was broken and failing their jobs, so we decided to increase the timeout to avoid the misunderstanding.

Solution

To solve this, we need to change the ldap-server's timeout, which is controlled by the olcIdleTimeout attribute.
Increasing this value extends how long the ldap-server keeps an idle connection open.
Note that olcIdleTimeout is measured in seconds.
Also note that changing it requires binding as the admin user of LDAP's config database.
After the change, the ldap-server service must be restarted.

Create the change-timeout.ldif file

First create the following file. For convenience we set the timeout to 12h (12 × 3600 = 43200 seconds).

change-timeout.ldif
dn: cn=config
changetype: modify
replace: olcIdleTimeout
olcIdleTimeout: 43200

Change the timeout with ldapmodify

Connect to the ldap-server and run ldapmodify to change the LDAP connection timeout, as follows.

Note that ldapmodify must bind as cn=admin,cn=config, the admin user of the config database.

ldapmodify -x -D cn=admin,cn=config -w <password-of-config-admin> -f change-timeout.ldif

The command prints the following:

modifying entry "cn=config"

When you see this output, the timeout has been changed successfully.

Confirm the timeout was changed

Run the following command:

ldapsearch -x -D cn=admin,cn=config -w <password-of-config-admin> -b cn=config|grep olcIdleTimeout

The command prints output like this:

olcIdleTimeout: 43200
olcAttributeTypes: ( OLcfgGlAt:18 NAME 'olcIdleTimeout' SYNTAX OMsInteger SING
PendingAuth $ olcDisallows $ olcGentleHUP $ olcIdleTimeout $ olcIndexSubstrIf

The output shows that olcIdleTimeout is now 43200.

Restart the ldap-server

To make sure the configuration takes effect, restart the LDAP service.
How to restart depends on the deployment; here we take an LDAP container as an example.

docker restart <ldap-container>

Verify the configuration took effect

On a machine running the nslcd service, execute the following command.
The two getent calls are 35s apart: the original default timeout was 30s, so if the new configuration has not taken effect, the Can't contact LDAP server error will reappear after 30s, and a 35s gap makes that visible.

getent passwd -s ldap && sleep 35s && getent passwd -s ldap

While it runs, watch the nslcd service log.
It is easiest to open two terminals, one for the command and one for the log, so you can observe the log directly:

journalctl -u nslcd -f

If the configuration took effect, you will no longer see the Can't contact LDAP server error.

Notes

The ldif file for the default timeout

change-timeout.ldif
dn: cn=config
changetype: modify
replace: olcIdleTimeout
olcIdleTimeout: 30

Inspect the nslcd configuration on the system

cat /etc/nslcd.conf

Inspect the ldap-server's olcRoot users

olcRoot identifies the ldap-server's super administrators; these users can manage the ldap-server.

Run the following command to list the ldap-server's olcRoot entries:

cd /etc/ldap/slapd.d/cn=config && grep -r 'olcRoot' *

The command prints output like the following:

olcDatabase={0}config.ldif:olcRootDN: cn=admin,cn=config
olcDatabase={0}config.ldif:olcRootPW:: xxxx
olcDatabase={1}mdb.ldif:olcRootDN: cn=admin,dc=demo,dc=com
olcDatabase={1}mdb.ldif:olcRootPW:: xxxx

The output shows two olcRoot users: cn=admin,cn=config and cn=admin,dc=demo,dc=com.
There are two because each belongs to a different olcDatabase.
olcDatabase={0}config.ldif is the ldap-server's configuration database; it stores the server's configuration.
olcDatabase={1}mdb.ldif is the data database; it stores the directory data that we actually use.

Make sure enough file descriptors are available

Because we increased the ldap-server's timeout, connections stay open longer, so we need to make sure the ldap-server has enough file descriptors available.
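
One way to check, sketched here under the assumption that the server process is slapd (the OpenLDAP daemon):

# file-descriptor limit of the running slapd process
cat /proc/$(pidof slapd)/limits | grep 'open files'
# number of descriptors currently in use
ls /proc/$(pidof slapd)/fd | wc -l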

References

Troubleshooting TLS Handshake Failures with a gRPC Service

· 3 min read
orange
programmer on jvm platform

Recently, connecting to a gRPC service exposed over the public internet failed with the following exception:

Exception in thread "main" io.grpc.StatusException: UNAVAILABLE: io exception
Channel Pipeline: [SslHandler#0, ProtocolNegotiators$ClientTlsHandler#0, WriteBufferingAndExceptionHandler#0, DefaultChannelPipeline$TailContext#0]
at io.grpc.Status.asException(Status.java:554)
at io.grpc.kotlin.ClientCalls$rpcImpl$1$1$1.onClose(ClientCalls.kt:296)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:576)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:757)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:736)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 485454502f312e312034303320466f7262696464656e0a436f6e74656e742d547970653a20746578742f68746d6c3b20636861727365743d7574662d380a5365727665723a2041444d2f322e312e310a436f6e6e656374696f6e3a20636c6f73650a436f6e74656e742d4c656e6774683a203531320a0a3c68746d6c3e0a3c686561643e0a3c6d65746120687474702d65717569763d22436f6e74656e742d547970652220636f6e74656e743d22746578746d6c3b636861727365743d5554462d3822202f3e0a2020203c7374796c653e626f64797b6261636b67726f756e642d636f6c6f723a234646464646467d3c2f7374796c653e200a3c7469746c653e7a687935342d48473130302d32e99d9ee6b395e998bbe696ad3c2f7469746c653e0a20203c736372697074206c616e67756167653d226a6176617363726970742220747970653d22746578742f6a617661736372697074223e0a20202020202020202077696e646f772e6f6e6c6f6164203d2066756e6374696f6e202829207b200a2020202020202020202020646f63756d656e742e676574456c656d656e744279496428226d61696e4672616d6522292e7372633d2022687474703a2f2f3230332e39332e3137302e3231393a393038302f6572726f722e68746d6c223b200a2020202020202020202020207d0a3c2f7363726970743e2020200a3c2f686561643e0a20203c626f64793e0a2020202020203c696672616d652069643d226d61696e4672616d6522207372633d2222206672616d65626f726465723d2230222077696474683d223130302522206865696768743d2231303025223e3c2f696672616d653e0a20203c2f626f64793e0a3c2f68746d6c3e
at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1215)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1285)
at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:529)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:468)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:290)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
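
A useful trick when diagnosing NotSslRecordException: the long hex string in the message is the raw payload the client received, and decoding it shows what the peer actually sent. Here it decodes to a plain-text HTTP 403 Forbidden response with an HTML body, meaning the client reached something that spoke plain HTTP instead of completing a TLS handshake. A small Kotlin sketch (the hex constant is truncated to the first bytes for brevity):

fun main() {
    // First bytes of the hex dump from the exception message above.
    val hex = "485454502f312e312034303320466f7262696464656e"
    val text = hex.chunked(2)
        .map { it.toInt(16).toByte() }
        .toByteArray()
        .toString(Charsets.UTF_8)
    println(text) // prints: HTTP/1.1 403 Forbidden
}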